This is my tech diary. I try to write about my hobby projects to remember what I've done for reference and for fun. Hopefully techy people with similar interests can benefit as well.
Knockout.js is really powerful. Unfortunately, a side effect of all that awesomeness is that its magic makes it hard to debug when it doesn't behave as you expect.
Here's a trick that might come in handy if a data binding does not behave properly. You can print the variables of the view model in the current scope by inserting a debugging tag in your HTML source.
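The tag itself didn't survive in this copy of the post, but the classic trick is to dump the current binding context as JSON. ko.toJSON is part of Knockout's own API, and its extra arguments are passed on to JSON.stringify for pretty-printing; the <pre> element is only there to make the output readable:

```html
<!-- Debug tag: prints the view model of the current binding scope, pretty-printed -->
<pre data-bind="text: ko.toJSON($data, null, 2)"></pre>
```

Drop it inside the element whose bindings misbehave and you'll see exactly which view model is in scope there; $root and $parent can be dumped the same way.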
I finally took the time this summer to read a book on Scala. I bought Programming in Scala by Martin Odersky, the father of the language, which I think was a good choice. Regardless of whether I'll write a lot of Scala programs in the future, I learnt some new general programming techniques and got a much-needed recap on programming language fundamentals from school. After reading it and applying it to a couple of hobby projects, I must say that I feel excited.
My first project for playing around with the language is a photo collage creator: you supply the program with a motif and a set of images to build a collage from. The algorithm tries to puzzle the images together into a collage that best matches the motif.
The motif to create the collage from.
The motif is divided into segments, and the image catalogue is searched for the best-fitting images to puzzle together.
The final collage in low-res.
When printing the collage in high resolution, for example 16384 x 10922 pixels, the effect is quite cool as you move from viewing the collage at a distance to a near closeup.
Let me just show you an arbitrary Scala function from this program that demonstrates a few of the things I like compared to my daily workhorse, Java.
/**
* Calculate the average brightness of a portion of an image.
*
* @param img Image to analyse for average brightness.
* @param startx Start x coordinate of image subset to analyze
* @param starty Start y coordinate of image subset to analyze
* @param stopx Stop x coordinate of image subset to analyze
* @param stopy Stop y coordinate of image subset to analyze
* @return Average brightness of the subset of the image
*/
def brightness(img: BufferedImage, startx: Int, starty: Int, stopx: Int, stopy: Int): Int = {
  @tailrec
  def estimateBrightness(x: Int, y: Int, maxx: Int, maxy: Int, aggr: Int): Int = {
    if (y == maxy)
      aggr
    else if (x == maxx)
      estimateBrightness(startx, y + 1, maxx, maxy, aggr)
    else
      estimateBrightness(x + 1, y, maxx, maxy, aggr + rgb2gray(img.getRGB(x, y)))
  }

  /*
   * Average: divide the aggregated brightness of all evaluated
   * pixels by the number of pixels.
   */
  val aggregatedBrightness = estimateBrightness(startx, starty, stopx, stopy, 0)
  aggregatedBrightness / ((stopx - startx) * (stopy - starty))
}
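For comparison with my daily workhorse, here is the same calculation as plain imperative Java, effectively what the tail-call optimized version boils down to. The rgb2gray helper isn't shown in the Scala snippet, so the integer BT.601 luma weights below are my own assumption:

```java
import java.awt.image.BufferedImage;

public class Brightness {

    // Assumed implementation: convert a packed 0xAARRGGBB pixel to a
    // grayscale value 0-255 using integer BT.601 luma weights.
    static int rgb2gray(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return (299 * r + 587 * g + 114 * b) / 1000;
    }

    // Average brightness of the subset [startx, stopx) x [starty, stopy),
    // written as the double loop the tail-recursive Scala version replaces.
    static int brightness(BufferedImage img, int startx, int starty, int stopx, int stopy) {
        int aggr = 0;
        for (int y = starty; y < stopy; y++) {
            for (int x = startx; x < stopx; x++) {
                aggr += rgb2gray(img.getRGB(x, y));
            }
        }
        return aggr / ((stopx - startx) * (stopy - starty));
    }
}
```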
As you can see, Scala is statically typed, but the compiler infers as much as possible. You create an immutable value with the val keyword, and in this example the compiler figures out that aggregatedBrightness must be of type Int, since that is what estimateBrightness() evaluates to. This saves you a lot of boilerplate declarations.
But what about the function estimateBrightness()? It is declared inside the scope of the function brightness(). In Scala a function is on par with plain old objects: it can be referenced via variables, passed as an argument to other functions, and consequently also declared inside functions. Why shouldn't it always be so?
Everything has a value: even a for loop or an if clause results in something that can be assigned to a variable or passed on to another expression. This makes for concise and beautiful code.
Scala is basically a functional language, but with all the imperative concepts around, making it easy for imperative people like me to transition to a more functional style at a tempo that suits me. In this example I did my calculations in a functional style, using recursion instead of loop constructs. I tried to write the program without any for or while loops at all, but my conclusion is that just because everything can be made recursive and functional, it is not inherently more readable and understandable. I'll stick to my imperative guns when I need them for some time more.
An interesting annotation is @tailrec on the local function declaration. It forces the compiler to verify that the recursive function will be tail-call optimized, meaning you can be sure it will not create a new stack frame for each invocation in the recursive loop. Otherwise you would run out of stack after some 10 000 invocations, depending on your JVM startup flags.
My impression is that writing efficient and understandable functional programs places higher demands on the programmer than plain old Java/C/C++. A challenge I'm gladly willing to continue with.
Instead of me trying to convince you that Scala is a great contribution to the JVM family, I strongly recommend you read the book. You'll definitely become a better C# or Java programmer afterwards as well.
If you want to play around with the photo collage creator program and generate some collages of your own, clone the code from GitHub.
I have used Eclipse with the Scala plugin, which makes the program run without any hassle. To configure it without any code changes, create a directory photos in the module, put one image in that directory as your motif, and name it motive.jpg. Put all the images that will be part of the puzzle in a subdirectory called inputphotos. Run PhotoCollage and monitor standard out until the program is finished. Run time depends mainly on the number of images in the inputphotos directory.
Finally got my Raspberry Pi! The cheap $25/$35 board with a 700 MHz ARM CPU, a GPU, 256 MB RAM, dual USB, Ethernet and a bunch of general purpose IO pins. It looks awesome in its bare metal, and firing it up is no problem. I flashed an SD card with the Raspbian "wheezy" Linux distribution. To write the image to the SD card I used Win32DiskImager from a PC with an SD card slot.
After attaching a USB keyboard, network cable and HDMI, it comes to life with a micro USB as power supply. The LEDs flash, and even X runs quite smoothly on this limited hardware.
However, after playing around I soon realised I would be much more comfortable working remotely from my ordinary desktop machine. So how do you enable ssh?
A lot of tutorials mention something as simple as this to enable the ssh daemon on boot:
sudo mv /boot/boot_enable_ssh.rc /boot/boot.rc
Sorry, I have no such files in my /boot directory. Furthermore, when trying to start the ssh daemon with
/etc/init.d/ssh start
it refuses to start. Clues in the startup log are
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_dsa_key
Why they are corrupt I don't know, but it's easy to regenerate them.
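On Raspbian the regeneration can be done with the standard Debian tooling; this is the approach I mean (remove the broken keys, then let the openssh-server package scripts recreate them):

```shell
# Remove the corrupt host keys and let Debian's package scripts regenerate them
sudo rm -f /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server
```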
Add a user to the authorization file conf/tomcat-users.xml. Directly below <tomcat-users>, add
<user username="system" password="raspberry" roles="manager-gui"/>
Now start Tomcat:
cd ../bin
sudo sh startup.sh
Nice! From your PC (or via a browser on the Pi), browse to the Tomcat console: http://192.168.1.90:8080/
(Find the IP address with, for example, ifconfig.)
It takes a short while for the server to warm up, but then you can log in via Manager App. From there it's business as usual. Upload a war archive and you have a nifty web server running your web application, for $35!
A quick guide to inspecting iPhone/iOS network traffic
There are more and more reports of malware on our phones these days. What are the apps on your iPhone actually sending and receiving?
Download a web proxy. I use Fiddler (http://www.fiddler2.com/) on my PC, but there is for example Paros, which is written in Java, if you want to run on any platform. This tutorial uses Fiddler.
Install Fiddler and fire it up. Go to Tools -> Fiddler Options, tab Connections, and select "Allow remote computers to connect".
Also note the port number, 8888, or change it to something that suits you.
Restart Fiddler.
Now open a command prompt and run ipconfig to find your IP address, or on a Mac/Linux machine: ifconfig
On your iOS device, go to your Wi-Fi settings and scroll down to the proxy settings. Choose manual settings and type in the proxy computer's IP address and port.
Fire away!
Here's an example of stock information sent by the Stocks app on my iPhone. I've chosen the XML view in the response inspector to get a pretty format.
Compression
Many sites compress their HTTP responses, which Fiddler supports. In the Inspector view in Fiddler, use the Raw format tab. I almost always use it anyway; if the response is gzipped, there is a hint at the top of the window that lets you unzip it on the fly.
HTTPS
Another obstacle in monitoring traffic can be that the client app and the server communicate over SSL. You won't notice it in the protocol column, since Fiddler tells you it's plain HTTP, but in the Host column you'll see it says "Tunnel to". There is a way to get around at least some of the SSL problems: enable "Decrypt HTTPS traffic" in the HTTPS tab in Fiddler Options.
What this really means is that Fiddler will act as a man in the middle and generate SSL server certificates on the fly, mimicking the real server. Obviously, your iPhone will not trust the root certificate Fiddler used to create the fake certificates, so you will be prompted with "Insecure certificate, possible attacker..." etcetera if you, for example, browse to https://www.google.com. Some apps/sites won't work at all if they don't trust the certificate.
In Fiddler, you can export the Fiddler root certificate to a .cer file and import it on your iPhone to trust it. It ends up under Options -> Profile as a trusted certificate. But I wouldn't recommend adding unknown certificates as trusted unless you know what you're doing.
Here's how you setup a REST service deployed in the Google App Engine cloud in 15 minutes. The use case in this example is a highscore backend service for my Android game Othello Legends.
Requirements:
We want to create a REST interface for these resources representing a highscore service.
GET http://myapp.appspot.com/api/highscores/
Fetch all applications backed by the highscore service since we want to reuse this for multiple games.
GET http://myapp.appspot.com/api/highscores/myappname
Fetch a sorted listed of highscores for a particular application myappname.
POST http://myapp.appspot.com/api/highscores/myappname
Post a potential new highscore to the service. If it makes it to the highscore list, it will be saved in the database. The data is sent as query parameters.
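Once deployed, the three resources can be exercised with curl like this (the app name othellolegends and the query parameter values are made-up examples):

```shell
# List all applications known to the highscore service
curl http://myapp.appspot.com/api/highscores/

# Sorted highscores for one application
curl http://myapp.appspot.com/api/highscores/othellolegends

# Post a potential new highscore as query parameters
curl -X POST "http://myapp.appspot.com/api/highscores/othellolegends?owner=johan&score=1200&level=5"
```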
Ingredients of the solution:
Google App Engine runs Java and Python. This example will use the Java infrastructure.
So what we'll do is create a standard Java J2EE web application built for deployment on App Engine, backed by a simple DAO to abstract the Google BigTable database. By using Spring's REST support together with Jackson, we can communicate with JSON in a RESTful manner with minimum effort.
Sounds complicated? Not at all, here's how you do it!
So to create an App Engine web app, click the New Web Application Project icon. Deselect Google Web Toolkit if you don't intend to use it.
Now, we're going to use Spring's REST support for the heavy lifting. Download Spring Framework 3 or later from http://www.springsource.org/download. While you're at it, download the Jackson JSON library from http://jackson.codehaus.org/. Put the downloaded jars in the /war/WEB-INF/lib/ folder and add them to the classpath of your web application.
Now, to bootstrap Spring to handle your incoming servlet requests you should edit the web.xml file of your web application found in war/WEB-INF/.
<servlet>
    <servlet-name>api</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>api</servlet-name>
    <url-pattern>/api/*</url-pattern>
</servlet-mapping>
<welcome-file-list>
    <welcome-file>index.html</welcome-file>
</welcome-file-list>
That puts Spring in charge of everything coming in under the path /api/*. Spring must now know which packages to scan for Spring-annotated classes. We add a Spring configuration file for this, plus some Spring/Jackson config specifying how to convert our Java POJOs to JSON. Put this in a file called api-servlet.xml in war/WEB-INF.
Without going into detail, this config pretty much tells Spring to convert POJOs to JSON by default, using Jackson, for servlet responses. If you're not interested in the details, just grab it, but you must adjust <context:component-scan base-package="se.noren.othello" /> to match your package names.
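The file itself isn't reproduced in this copy of the post; a minimal sketch of what it needs to contain, assuming Spring 3 with Jackson 1 (a MappingJacksonJsonView resolved by bean name, so that the controllers' "highScoresView" renders the model as JSON), would look roughly like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <!-- Adjust the base package to your own code -->
    <context:component-scan base-package="se.noren.othello" />

    <!-- Resolve view names to beans; "highScoresView" marshals the model to JSON via Jackson -->
    <bean class="org.springframework.web.servlet.view.BeanNameViewResolver" />
    <bean id="highScoresView"
          class="org.springframework.web.servlet.view.json.MappingJacksonJsonView" />
</beans>
```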
Now to the fun part, mapping Java code to the REST resources we want to expose. We need a controller class to annotate how our Java methods should map to the exposed HTTP URIs. Create something similar to
import java.util.Date;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.validation.BindingResult;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.servlet.ModelAndView;
/**
* Controller for Legends app family highscore services.
*/
@Controller
@RequestMapping("/highscores")
public class LegendsHighScoreController {

    private static final long serialVersionUID = 1L;

    @Autowired
    HighScoreService highScoresService;

    /**
     * @return Fetch all registered applications in the highscore database.
     */
    @RequestMapping(value = "/", method = RequestMethod.GET)
    public ModelAndView getAllApplications() {
        List<String> allApplications = highScoresService.getAllApplications();
        return new ModelAndView("highScoresView", BindingResult.MODEL_KEY_PREFIX + "applications", allApplications);
    }

    /**
     * Fetch all highscores for a particular application.
     * @param application Name of application
     * @return
     */
    @RequestMapping(value = "/{application}", method = RequestMethod.GET)
    public ModelAndView getAllHighScores(@PathVariable String application) {
        List<HighScore> allHighScores = highScoresService.getAllHighScores(application);
        return new ModelAndView("highScoresView", BindingResult.MODEL_KEY_PREFIX + "scores", allHighScores);
    }

    /**
     * Add a new highscore to the database if it makes it to the high score list.
     * @param application Name of application
     * @param owner Owner of the highscore
     * @param score Score as whole number
     * @param level Level of player reaching score.
     * @return The created score.
     */
    @RequestMapping(value = "/{application}", method = RequestMethod.POST)
    public ModelAndView addHighScores(@PathVariable String application,
                                      @RequestParam String owner,
                                      @RequestParam long score,
                                      @RequestParam long level) {
        HighScore highScore = new HighScore(owner, score, application, new Date().getTime(), level);
        highScoresService.addHighScores(highScore);
        return new ModelAndView("highScoresView", BindingResult.MODEL_KEY_PREFIX + "scores", highScore);
    }
}
So what's the deal with all the annotations? They're pretty self explanatory once you start matching the Java methods to the three HTTP REST URIs we wanted to create, but in short:
@Controller - The usual Spring annotation telling Spring that this is a controller class to be managed by the Spring container. All the RESTful stuff is contained within this class.
@RequestMapping("/highscores") - This means that this controller class should accept REST calls under the path /highscores. Since we deployed the servlet under servlet mapping /api in the web.xml this means all REST resources resides under http://host.com/api/highscores
@Autowired HighScoreService highScoresService - Our backing service class to do real business logic. Agnostic that we're using a RESTful front.
@RequestMapping(value = "/{application}", method = RequestMethod.GET) public ModelAndView getAllHighScores(@PathVariable String application) - A method annotated like this creates a REST resource /api/highscores/dynamicAppName where the value given for dynamicAppName is given via the path variable application. The request method specifies that this Java method will be called if this URI is requested via HTTP GET. All ordinary HTTP verbs are supported.
@RequestParam String owner - If you wish to pass query parameters like myvar1=foo&myvar2=bar you can use the request param annotation.
The Java class returned in the ModelAndView response will be automatically marshalled to JSON by Jackson, with the same structure as the Java POJO.
Database
Google App Engine uses Google BigTable behind the scenes to store data. You can abstract this by using standard JPA annotations on your POJOs; the similar JDO standard can be used as well. I've used JDO in previous projects and it works very well. For this simple server application, however, we will use the query API to access the document database directly. Here's the code for the method that fetches all highscores for a particular Legends application. The database can filter and sort via API methods on the query.
import java.util.ArrayList;
import java.util.List;

import org.springframework.stereotype.Service;

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.PreparedQuery;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Query.FilterOperator;
import com.google.appengine.api.datastore.Query.SortDirection;

@Service
public class HighScoreServiceImpl implements HighScoreService {

    @Override
    public List<HighScore> getAllHighScores(String application) {
        ArrayList<HighScore> list = new ArrayList<HighScore>();
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        // The Query interface assembles a query
        Query q = new Query("HighScore");
        q.addFilter("application", Query.FilterOperator.EQUAL, application);
        q.addFilter("score", FilterOperator.GREATER_THAN_OR_EQUAL, 0);
        q.addSort("score", SortDirection.DESCENDING);

        // PreparedQuery contains the methods for fetching query results
        // from the datastore
        PreparedQuery pq = datastore.prepare(q);
        for (Entity result : pq.asIterable()) {
            String owner = (String) result.getProperty("owner");
            Long date = (Long) result.getProperty("date");
            Long score = (Long) result.getProperty("score");
            Long level = (Long) result.getProperty("level");
            list.add(new HighScore(owner, score, application, date, level));
        }
        return list;
    }

    // Remaining HighScoreService methods (getAllApplications, addHighScores) omitted
}
That's pretty much it. Run the project locally by right-clicking it and choosing Run As -> Web Application. Once you are ready to go live, create a cloud application at https://appengine.google.com/ via Create new application.
Now in Eclipse, right click on your project and choose Google -> Deploy to Google App Engine.
You will be asked to supply the name you created in the App Engine administration interface. Wait a few seconds and the application will be deployed in the cloud.
A first version of Othello Legends is now published to the Android Market, check out https://market.android.com/details?id=se.noren.android.othello! The game is the classic Othello/Reversi with a twist for unlocking harder levels when beating opponents. Compete with other players by increasing your total scores.
The aim of the project was to learn how to build an Android application of some complexity with features like OpenGL rendering, integration with backend server, Google AdMob for serving ads, Google Analytics tracking of application usage, SD storage and possibility to run on both phones and tablets with appealing layout and performance.
Lesson learned 1 - 3D
Accelerated 3D graphics is hard even if you're experienced with desktop OpenGL. Making OpenGL ES work on all devices requires a lot of testing, which you can't do without the devices. And debugging accelerated graphics in the Android emulator is generally no fun. Therefore, use a library to lift you above the low-level details. I looked into jMonkeyEngine and libgdx, which are large general-purpose frameworks with quite massive APIs; they would probably have worked out great but seemed to have some threshold for a newcomer to overcome.
In the end I decided to work with the more limited jPCT which has worked out very well. A stable and reliable library with an active community. jPCT handles 3DS-models well which makes it easy to create environments via some tooling.
I used the open source modeller Blender, which is free and supports everything you would need, such as sculpt modelling and texture UV coordinate tooling. Another appealing feature of jPCT is that it is developed both for Android and for desktop Java, so you can port your apps between them without great effort.
Lesson 2 - Revenue model
If you haven't decided whether to charge for your app or use ads, I can only say that ads are easy! If you're familiar with Google AdSense for putting ads on your websites, you'll find Google AdMob intuitive to work with. If you have an Android Activity made up of standard Android layouts, you can simply add an AdView to your layout and the ad library will populate the container with ads.
Compared to the standard Google AdSense interfaces for managing and following up your ad reports AdMob is more limited and not as well polished but who cares? Will revenues be larger with mobile app ads than with ordinary web ads? I'll come back later on that.
Lesson 3 - Mobile is not desktop
Memory is scarce when you go down the 3D pathway. I discovered early that you must be frugal with your textures and the polygon counts of your meshes. The devices have no problem rendering polygon-heavy meshes at impressive framerates, but you soon run out of memory if you don't do clever texture unloads when you no longer need them. My lesson here: create a game engine with strict modules for each game state, so that you can be sure to deallocate all resources when you change state, and use lower-res textures than you usually would.
Lesson 4 - Tune your app after how your users use it
In this game each level becomes more difficult, and a good tuning approach seemed to be to make the first two levels easy enough that all players pass them; after that it should get exponentially more difficult. But how do you know how your users are actually doing? I noticed the game was way too hard when I tried it on people. Some sort of surveillance would be nice, without intruding on the users' privacy. The lesson here is to not invent anything new: with Google Analytics you can track how users travel around in your application by marking different states, just as you would mark web pages on a site to follow traffic, and then adapt your game to how users respond.
Lesson 5 - Android is not Java
Another more depressing lesson learned: when you plan to reuse a Java library, first do a Google search to see whether anyone has had difficulties using it on the Android platform. For example, the JSON marshaller Jackson proved hard to use.
My current Android application project is starting to make sense. Unfortunately it crashes after a few levels of playing due to java.lang.OutOfMemoryError. Up to that point I hadn't put much thought into the memory model of Android applications and had simply consumed memory without hesitation. I've now been forced to rewrite some critical parts of the application, and I thought I'd write a few words to remember the most useful tools I came across.
First of all, Android apps have small heaps, and the sizes differ; it's up to the vendor of the device to decide. Here are a few numbers I came across:
G1 = 16 Mb
Droid = 24 Mb
Nexus One = 32 Mb
Xoom = 48 Mb
GalaxyTab = 64 Mb
So you see that the allocated heaps use far from the entire RAM of the devices, since no single application should be able to clog the system. The natural approach to a memory problem would be to increase the heap, but that is not so easy. If you have a rooted phone you may edit
/system/build.prop
and set the heap size via
dalvik.vm.heapsize=24m
Or, if you're running on a tablet (3.x) Android version there is a manifest setting to ask for a large heap
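The setting referred to is presumably android:largeHeap, introduced in Android 3.0, on the application element of AndroidManifest.xml:

```xml
<!-- In AndroidManifest.xml: request a large heap (honored on Android 3.0+) -->
<application
    android:label="@string/app_name"
    android:largeHeap="true">
    <!-- activities etc. -->
</application>
```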
but that is no guarantee and you will instead be punished with longer GC cycle times.
On the other hand, changing the VM heap size in your emulator is easy, and could be a good thing in order to verify that your app works on devices with smaller heaps. To do that, fire up your Android SDK and AVD Manager and click edit on your virtual device. Under hardware, there is a setting Max VM application heap size.
So the conclusion is that you have to live with small heaps and limited memory. How do you then get an estimate of your consumed memory and of how much is available?
Run your application in the emulator or connect your real device via USB and use the Android Debug Bridge (adb). It's located in your Android SDK tools folder.
To dump memory info for all your running applications, use:
$> adb shell dumpsys meminfo
To understand this table, we must know that there is a managed heap (dalvik) and a native heap. Some graphics, for example, are stored in the native heap. Importantly, it is the sum of these heaps that cannot exceed the VM heap size, so you can't fool the runtime by putting more stuff in either the native or the managed heap. To me, the most important numbers are therefore those under dalvik and total. The dalvik heap is the managed VM heap, and the native numbers are memory allocated by native libraries (malloc).
You'll probably see these numbers fluctuating each time you run the command; that is because objects are allocated by the runtime all the time, but GCs are not run particularly often. So, in order to know that you really have garbage collected all unused objects, you must wait for the Android debug log in logcat to print something like GC_FOR_MALLOC or GC_EXTERNAL_MALLOC, which indicates that the GC has been invoked. Still, this does not mean that all unused memory has been released, since the GC might not have done a complete sweep.
You can of course ask for a GC programmatically with System.gc(), but that is rarely a good option. You should trust the VM to garbage collect for you; if you for example try to allocate a large memory chunk, the GC will be invoked if necessary.
You can force a gc using the Dalvik Debug Monitor (DDMS). Either start it from Eclipse or from the ddms tool in the Android SDK installation folders.
If you can't see your process right away, go to menu Actions and Reset adb. After that you can turn on heap updates via the green icon Show heap updates. To force a GC, click on Cause GC.
If you wish to monitor the memory usage programmatically there are a few APIs you can use.
ActivityManager.getMemoryInfo() can be used to get an idea of the memory situation for the whole Android system. If the numbers run low, you can expect background processes to be killed off soon.
For example, to see how much memory is allocated in the native heap, use: Debug.getNativeHeapAllocatedSize()
So back to DDMS. This tool can also create heap dumps, which are particularly useful when tracking down memory leaks. To dump the heap, click the Dump HPROF file icon. There are many tools for analyzing heap dumps, but the one I'm most familiar with is the Eclipse Memory Analyzer (MAT). Download it from http://www.eclipse.org/mat/.
MAT can't handle the DDMS heap dumps right away, but there is a converter tool in the Android SDK.
So simply run this command.
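The converter is hprof-conv, which lives next to adb in the SDK tools folder (the file names here are just examples):

```shell
# Convert a Dalvik heap dump to the standard HPROF format MAT understands
hprof-conv dump.hprof converted.hprof
```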
Then you can open the converted heap dump in MAT. An important concept here is retained size: the retained size of an object found in the heap is how much memory could be freed if that object could be garbage collected. That includes the object itself, but also the child objects that no objects outside the retained set have references to.
MAT gives you an overview of where your memory is allocated and has some good tooling on finding suspicious allocations that could be memory leaks.
So to find my memory leak, I used the Dominator Tree tab, which sorts the allocated objects by retained heap, and I soon discovered that the GLRenderer object held far too many references to a large 512x512 texture.
The tool becomes even more valuable when the leaking objects are small but many. The dominator tree tells you right away that you have a single object holding a much larger retained heap than you would expect it to.
If you want to learn more, check out this talk by Patrick Dubroy on Android memory management from Google I/O 2011, where he explains the Android memory model in more detail.