Saturday, 28 December 2013

Git GitHub Cheat Sheet

Some of the most frequent git commands I use, but also forget between the times I need them.

Setup a repository

git clone <repository>
Clone the specified repository to the current location. Think CVS checkout.
To clone a GitHub repository, copy the link from the repository's page on the GitHub site.
git clone https://github.com/johannoren/WeatherStation.git

If you instead start by creating your project locally, you initialize Git with
mkdir MyProject
cd MyProject
git init

To add and commit a file
touch foo
git add foo
git commit -m "First commit"

To connect your local repository to GitHub (or another remote repository), assuming we have created a MyProject project on GitHub, we create a remote named origin.
git remote add origin https://github.com/johannoren/MyProject.git

To push our locally committed file to the remote origin on the default branch master
git push origin master

To clone a repository from one of your other machines, do this over ssh
git clone ssh://johan@192.168.1.6/home/johan/gitrepos/MyRepo.git

Git configuration

git config -e
Open the Git config file in the default editor

git config --global user.name 'Johan Noren'
git config --global user.email abc@gmail.com
Set the author name and email used when making commits

git config --list
List all Git configuration options

Adding --global to these config commands, as in the examples above, applies the settings to all your repositories instead of only the current one.
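For example, to use a different address in just one repository (the address here is hypothetical), run this inside that repository without --global:
git config user.email johan.work@example.com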

Diffs, shows, blames, history and info

git diff
Show what has changed in your working tree that you have not yet staged. To diff only a single file, use
git diff -- <filename>

git status
List added files to the staging area, changed files and untracked files.

git log
Show the most recent commits. Comes with a collection of options (a combined example follows the list):
--color                  Color-coded output
--graph                  Commit graph added to the left
--decorate               Add branch and tag names to commits
--stat                   Show files changed, insertions, deletions
-p                       Show all diffs
--author="Johan Noren"   Commits by a certain author
--after="MMM DD YYYY"    Filter commits by date, e.g. "Jun 20 2008"
--before="MMM DD YYYY"   Same, but commits before the given date
--merge                  Show only commits relevant to the current merge conflict
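A combined example, mixing several of the options above (the author and date are just placeholders):
git log --graph --decorate --stat --author="Johan Noren" --after="Jun 20 2008"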

git show <revision>
Show the diff of a commit specified by <revision>. Revision can be any SHA1 commit ID, branch name, or tag.
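A few examples (the tag name is the one created in the Tags section below):
git show HEAD~1
git show origin/master
git show v1.0_20140102_2122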

git blame <filename>
Show the author of each line in a file.

git ls-files
List all files version controlled

Add and delete files

git add <file> <file> ...
Add files to the git project

git add <directory>
Add all files in <directory>, including all subdirectories

git add .
Add all created/modified files in the current directory (but not deletions)

git add -u  
Add to the index only deleted/modified files, not newly created ones

git add -A  
Do both operations at once: add all changes (new, modified and deleted files) to the index

git add -p
Patch mode allows you to stage parts of a changed file instead of the entire file. This lets you make concise, well-crafted commits that make for an easier-to-read history.
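A typical prompt per hunk looks something like this (the exact option list varies between Git versions):
Stage this hunk [y,n,q,a,d,/,s,e,?]?
where y stages the hunk, n skips it, s splits it into smaller hunks and q quits.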

git rm <file> <file>
Remove the files from the git project.

git rm $(git ls-files --deleted)
Remove all deleted files from the git project

Stage and commit

git add <file1> <file2> ...
git stage <file1> <file2> ...
Add changes in <file1>, <file2> ... to the staging area which will be included in the next commit

git add -p
git stage --patch
Walk through the current changes (hunks) and decide for each change which to add to the staging area.

git reset HEAD <file1> <file2> ...
Unstage files so they are not included in the next commit (the working tree is left untouched)

git commit <file1> <file2> ... [-m <message>]
Commit <file1>, <file2> ... optionally using commit message <message>

git commit -a
Commit all tracked files changed since your last commit, not including new (untracked) files.

Branches and merging

git branch
List all local branches

git branch -r
List all remote branches

git branch <branchname>
Create a new branch named <branchname>, referencing the same point in history as the current branch.

git branch <branch> <start-point>
Create a new branch named <branch>, referencing <start-point>, which may be specified using a branch name, a tag name, a commit SHA and more.

git push <repo> <start-point>:refs/heads/<branch>
Create a new remote branch named <branch>, referencing <start-point> on the remote. <repo> is the name of the remote.
Examples:
git push origin origin:refs/heads/branch-1
git push origin origin/branch-1:refs/heads/branch-2
git push origin branch-1 ## shortcut

git branch -d <branchname>
Delete branch <branchname>

git branch -r -d <remote-branch>
Delete a remote-tracking branch (this only removes the local tracking ref, not the branch on the remote). Example
git branch -r -d origin/branch-1

git checkout <branchname>
Update the working directory to reflect the version referenced by <branchname> and make the current branch <branchname>

git checkout -b <branchname> <start-point>
Create a new branch <branchname> referencing <start-point>, and check it out.


git merge <branchname>
Merge branch <branchname> into the current branch.
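A small example flow, assuming a hypothetical feature branch named feature-x:
git checkout master
git merge feature-x
## if there are conflicts: edit the conflicted files, then
git add <conflicted files>
git commit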

Tags

git tag
List all available tags

There are two types of tags, lightweight and annotated. A lightweight tag is just a pointer to a commit, while an annotated tag ('-a') also stores the tagger, date and a message, so use annotated tags if unsure.
git tag -a v1.0_20140102_2122 -m 'Tag information'

To push the tag to GitHub use
git push origin <tagname>
or, if you have many tags or are lazy, push all tags at once
git push origin --tags

Handle remote repositories

git fetch <remote>
Update the remote-tracking branches for <remote> (defaults to "origin"). Does not initiate a merge into the current branch, see "git pull".

git pull
Fetch changes from the server, and merge them into the current branch.

git push
Update the server with the commits across all branches that are common between the local copy and the server. Local branches that were never pushed to the server are not shared.

git push origin <branch>
Update the server with your commits made to <branch> since your last push. This is always required for new branches that you wish to share. After the first explicit push, "git push" is sufficient.
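If your Git version supports it, adding -u (--set-upstream) to the first push stores the tracking information so that a plain "git push" and "git pull" work afterwards:
git push -u origin <branch>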

git remote add <remote> <remote_url>
Add a remote repository to your Git config; it can then be fetched locally.
git remote add myworkteam git://github.com/somename/someproject.git
git fetch myworkteam

Revert local changes

Assuming you did not commit the file, or add it to the index, then:
git checkout filename

Assuming you added it to the index, but did not commit it, then:
git reset HEAD filename
git checkout filename

Assuming you did commit it, then restore the file from the remote branch:
git checkout origin/master -- filename

Assuming you want to blow away all commits from your branch (VERY DESTRUCTIVE):
git reset --hard origin/master

Friday, 20 December 2013

CouchDB on Linux Mint - Cheat sheet

Here's my cheat sheet for working with CouchDB on a Linux Mint server.

Installation

Installation is trivial
sudo apt-get install couchdb -y

Test to see that it works.
curl http://127.0.0.1:5984/

should return something like

{"couchdb":"Welcome","version":"1.2.0"}

Create a database named 'johantest'
curl -X PUT http://127.0.0.1:5984/johantest

Delete the same database
curl -X DELETE http://127.0.0.1:5984/johantest


Adding documents

Assuming you didn't delete the database, add a document like this
curl -X POST http://127.0.0.1:5984/johantest \
-H 'Content-Type: application/json' \
-d '{"SomeProperty":"Test1","AnotherProperty":"Test2","Age":32,"PropList":["Prop1","Prop2","Prop3"]}'

To retrieve all documents in a database

curl -X GET http://127.0.0.1:5984/johantest/_all_docs

Gives
{"total_rows":1,"offset":0,"rows":[
{"id":"4571d772c7d61fa34a7579ce6f04d47b","key":"4571d772c7d61fa34a7579ce6f04d47b","value":{"rev":"1-a268e2541be30dae295fb88015bd1a5c"}}
]}

That only gives references, so you can GET each document's details individually. To fetch the contents of an individual document, use the key from the previous GET.
curl -X GET http://127.0.0.1:5984/johantest/4571d772c7d61fa34a7579ce6f04d47b

Gives
{"_id":"4571d772c7d61fa34a7579ce6f04d47b","_rev":"1-a268e2541be30dae295fb88015bd1a5c","SomeProperty":"Test1","AnotherProperty":"Test2","Age":32,"PropList":["Prop1","Prop2","Prop3"]}

If you wish to fetch all documents in one large JSON blob
curl -X GET http://127.0.0.1:5984/johantest/_all_docs?include_docs=true 

Gives
{"total_rows":1,"offset":0,"rows":[
{"id":"4571d772c7d61fa34a7579ce6f04d47b","key":"4571d772c7d61fa34a7579ce6f04d47b","value":{"rev":"1-a268e2541be30dae295fb88015bd1a5c"},"doc":{"_id":"4571d772c7d61fa34a7579ce6f04d47b","_rev":"1-a268e2541be30dae295fb88015bd1a5c","SomeProperty":"Test1","AnotherProperty":"Test2","Age":32,"PropList":["Prop1","Prop2","Prop3"]}}
]}

Settings and logs

Settings are stored in /etc/couchdb/default.ini
To be able to connect to the database via HTTP from other machines on the network, change
bind_address = 0.0.0.0
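Then restart CouchDB and try reaching it from another machine on the network (on my Mint box something like this should do; replace the IP with your server's address):
sudo service couchdb restart
curl http://192.168.1.6:5984/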

The actual database file is stored by default in
/var/lib/couchdb/1.2.0 and is, in this case, named johantest.couch

If you wish to move it to another partition or similar, change the settings
database_dir = /var/lib/couchdb/1.2.0
view_index_dir = /var/lib/couchdb/1.2.0

To get a feel for the amount of disk space CouchDB uses: my Weather Station application has stored 8800 JSON documents in a format like this

{"id":"6ea52d20ab226121a325d0e3c5ffe4b5","key":"6ea52d20ab226121a325d0e3c5ffe4b5","value":{"rev":"1-087372f223546fb03c75605decb4a442"},"doc":{"_id":"6ea52d20ab226121a325d0e3c5ffe4b5","_rev":"1-087372f223546fb03c75605decb4a442","temperature":23.75858110517378563,"rawDate":1385961273368,"key":"holmon"}}

Those occupy 28.9 MB of disk with the default compression settings and so on.

To see what requests have been made against the database, check the log at
/var/log/couchdb/couch.log
(the location can be changed in the settings file as well). Output for the tests above

[Sat, 28 Dec 2013 09:08:41 GMT] [info] [<0.2082.8>] 127.0.0.1 - - PUT /johantest 201
[Sat, 28 Dec 2013 09:34:03 GMT] [info] [<0.28823.7>] 127.0.0.1 - - DELETE /johantest 200
[Sat, 28 Dec 2013 09:34:07 GMT] [info] [<0.25248.7>] 127.0.0.1 - - PUT /johantest 201
[Sat, 28 Dec 2013 09:34:13 GMT] [info] [<0.3856.8>] 127.0.0.1 - - POST /johantest 201
[Sat, 28 Dec 2013 09:34:22 GMT] [info] [<0.2437.8>] 127.0.0.1 - - GET /johantest/_all_docs?include_docs=true 200
[Sat, 28 Dec 2013 09:35:29 GMT] [info] [<0.2449.8>] 127.0.0.1 - - GET /johantest/_all_docs 200
[Sat, 28 Dec 2013 09:35:41 GMT] [info] [<0.2339.8>] 127.0.0.1 - - GET /johantest/4571d772c7d61fa34a7579ce6f04d47b 200


Thursday, 19 December 2013

Clone a GitHub project into Eclipse

I've been using the Git command line tools to set up my workspaces against GitHub, importing projects into Eclipse and going back and forth to the command line to diff, commit and push back to origin. In my current project I've discovered the Git Repository Explorer in Eclipse and how it makes many of these tasks much easier.

To clone an existing GitHub repository, add the Git Repository Explorer perspective.

Choose to clone a repository, then add details about the remote repository.
The URI can be found on the GitHub page for the repository you wish to clone.

Select the branch to check out and, in the next view, tick "Import all existing projects after clone finishes".
Then you have your project in your workspace as expected.


Now the usual Eclipse goodies are here for you; diffing local changes, for example, is much clearer than with the command-line diff utility.

The synchronize option lets you see all changes in your local copy.

The history of a file is also presented in a more readable manner.


Commit and push to origin (GitHub in this case) can be a single operation if you wish.

Thursday, 14 November 2013

Spice up your comments

I was searching for something on StackOverflow and found this off-topic thread on comments.
A few gems:


Exception up = new Exception("Something is really wrong.");
throw up; //ha ha


//When I wrote this, only God and I understood what I was doing
//Now, God only knows


// 
// Dear maintainer:
// 
// Once you are done trying to 'optimize' this routine,
// and have realized what a terrible mistake that was,
// please increment the following counter as a warning
// to the next guy:
// 
// total_hours_wasted_here = 42
// 


//Mr. Compiler, please do not read this.


// I dedicate all this code, all my work, to my wife, Darlene, who will 
// have to support me and our three children and the dog once it gets 
// released into the public.

// drunk, fix later


// Magic. Do not touch.


return 1; # returns 1

// If I from the future read this I'll back in time and kill myself.


double penetration; // ouch

/////////////////////////////////////// this is a well commented line

// I am not sure if we need this, but too scared to delete. 

// I am not responsible of this code.
// They made me write it, against my will.

//Dear future me. Please forgive me. 
//I can't even begin to express how sorry I am. 

options.BatchSize = 300; //Madness? THIS IS SPARTA!

// I have to find a better job

// hack for ie browser (assuming that ie is a browser)

} catch (PartInitException pie) {
// Mmm... pie
}

// John! If you'll svn remove this once more,
// I'll shut you, for God's sake!
// That piece of code is not “something strange”!
// That is THE AUTH VALIDATION.

try {

}
catch (SQLException ex) {
// Basically, without saying too much, you're screwed. Royally and totally.
}
catch(Exception ex)
{
//If you thought you were screwed before, boy have I news for you!!!
}

// Catching exceptions is for communists

// If you're reading this, that means you have been put in charge of my previous project.
// I am so, so sorry for you. God speed.


/**
* For the brave souls who get this far: You are the chosen ones,
* the valiant knights of programming who toil away, without rest,
* fixing our most awful code. To you, true saviors, kings of men,
* I say this: never gonna give you up, never gonna let you down,
* never gonna run around and desert you. Never gonna make you cry,
* never gonna say goodbye. Never gonna tell a lie and hurt you.
*/


// If this code works, it was written by Paul. If not, I don't know who wrote it


/**
* If you don't understand this code, you should be flipping burgers instead.
*/


//Abandon all hope yea who enter beyond this point



catch (Ex as Exception)
{
// oh crap, we should do something.
}

// TODO make this work


// This is crap code but it's 3 a.m. and I need to get this working.

Sunday, 22 September 2013

Netflix on Linux Mint 14

Netflix uses Microsoft Silverlight in their web version, which makes it impossible to watch Netflix on Linux machines since no native client is available. The Pipelight project has released a plugin that emulates Silverlight using the Netscape Plugin API and Wine, so all browsers supporting the Netscape Plugin API, like Firefox and Chrome, should be able to use this trick.

This is tested on my Linux Mint 14 HTPC with Chrome and works like a charm. It is even faster than the earlier Netflix Desktop client for Linux that was available via apt-get.

If you have an earlier version of Pipelight, first remove it

sudo apt-get remove pipelight

Install the plugin and enable it

sudo apt-add-repository ppa:ehoover/compholio
sudo apt-add-repository ppa:mqchael/pipelight
sudo apt-get update
sudo apt-get install pipelight-multi
sudo pipelight-plugin --enable silverlight

That should be it. But the Netflix web site checks that your browser is a Windows-based browser. Work around this by installing a plugin that lets you choose which user agent your browser should mimic. I use User Agent Shifter for Chrome.

If you use this one, switch to Windows Firefox 15 or similar in the menu.

Pipelight also supports Adobe Flash, so if you need that, enable it via

pipelight-plugin --enable flash

Sunday, 14 April 2013

Simple backups with rsync

Here are my notes on how I set up my home network backup system, in case I forget.
I first thought of backing everything up to some cloud service like Google Drive or Dropbox, but that would be quite expensive since I have too many home movies, images and sound recordings. So what I've done is keep one backup of the stuff on a USB hard drive and another duplicate on the HTPC server. The same technique could be used to back up to a server at a friend's house to make the backups completely fireproof.

Using the old Unix command rsync, it is really easy to automate this. I installed rsync via cygwin (google for cwRsync, which is rsync for Windows). My HTPC is a Linux Mint server, so I created an RSA keypair for my Linux user and stored the key on my Windows desktop machine as c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc.

Now we can authenticate against the HTPC with ssh, which rsync supports, and I only need to create a configuration of what I want to synchronize.
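For reference, a minimal sketch of the key setup, assuming the cygwin openssh tools are installed (paths depend on your cygwin home directory):
ssh-keygen -t rsa -f ~/.ssh/id_rsa_rsync_johanhtpc
cat ~/.ssh/id_rsa_rsync_johanhtpc.pub | ssh johan@johanhtpc 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'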

I created a config command file like this.


@ECHO OFF
SETLOCAL
SET CWRSYNCHOME=%ProgramFiles(x86)%\CWRSYNC
SET CYGWIN=nontsec
SET HOME=%HOMEDRIVE%%HOMEPATH%
SET CWOLDPATH=%PATH%
SET PATH=%CWRSYNCHOME%\BIN;%PATH%

rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/e/Dokument" johan@johanhtpc:PCBackups
rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/e/Musik" johan@johanhtpc:PCBackups
rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/e/Programmering" johan@johanhtpc:PCBackups
rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/e/Bilder" johan@johanhtpc:PCBackups
rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/e/Audiobooks" johan@johanhtpc:PCBackups
rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/c/ws" johan@johanhtpc:PCBackups
rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/c/recordings" johan@johanhtpc:PCBackups
rsync -av --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/c/render" johan@johanhtpc:PCBackups


When run, rsync checks each folder on the Windows hard drive or external hard drive against the directory on the backup server. If new files have been created or existing files have been updated, they are synchronized. No action is taken for directories or files that are unchanged.
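If you want to see what would happen before running it for real, the -n (--dry-run) flag makes rsync list the changes without copying anything, for example:
rsync -avn --chmod u+rwx -e "ssh -i c:\docume~1\Johan\.ssh\id_rsa_rsync_johanhtpc" "/cygdrive/e/Dokument" johan@johanhtpc:PCBackups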

To automate this, I used the Windows scheduler. I added a task that runs each night at 00:59 and simply runs the above command file.
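From the command line, a scheduled task along these lines should also work (the task name and script path are hypothetical):
schtasks /create /tn "NightlyRsyncBackup" /tr "C:\scripts\backup.cmd" /sc daily /st 00:59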






Saturday, 2 February 2013

Create server backdoors using SQL Injection

If you're a web programmer you are probably aware of the most common security mistakes we make. OWASP keeps statistics on which exploits are the most common. If you're not familiar with these, this should be mandatory reading: https://www.owasp.org/index.php/Top_10_2010-Main.

As shown, injections are still one of the worst problems. If you're not familiar with SQL injection, check out some basic ways to exploit it on a vulnerable site. The examples are often about trying to select some sensitive data and getting it rendered on the vulnerable site.

I'm not a black hat hacker, so I've always thought of SQL injection as something that primarily puts the site and its data in danger. But tag along to see that SQL injection can be the entry point to pwning the complete server and getting inside the firewall and the internal network.

Now, I was reading up on SQL the other day for a project at work and stumbled upon some SQL syntax I didn't know about. Combining this with an SQL injection vulnerability could be dynamite.

So first, assume you have found a weakness on a site. There are tools for that, but basically try to append code to request parameters, like ' or 'foo'='1 or similar, and look for server errors that hint at SQL injection problems, such as Unknown column 'foo' in 'where clause'. Now you would "normally" start the tiresome work of finding something valuable in the database.
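As a hypothetical illustration, a probe could look like this (the part after id=1 is what you append to the original parameter value):
http://thesite.com/items.php?id=1' or 'foo'='1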

But with the SQL syntax INTO OUTFILE you can write files. Nice. So depending on what technology the server is based on, you could write files that can be accessed via the web interface. If the file names in the URI don't give away what technology is used, look at the HTTP header value of Server. If, for example, the header mentions JBoss, you can guess that the site is Java based and try to create JSP files. Similarly, you could aim for creating PHP script files etc. if that's what's dished out by the server.

So, by using something like this in the injection exploit (everything from UNION SELECT onwards is what the hacker supplies via the request parameter injection)

SELECT a, b FROM someunknowntable WHERE someunknowncolumn = '' UNION SELECT '<?php system($_GET["cmd"]); ?>' INTO OUTFILE '/var/www/htdocs/pwn.php'; --

or similar for other scripting technologies like JSP, ASP etc., you have created a public backdoor.
Catastrophe!
Point your browser to http://thesite.com/pwn.php?cmd=pwd
to print the current working directory of the web server process.

Now only imagination stops you: cmd=cat /etc/passwd /etc/shadow dumps all user credentials. If the web server is running as root, it's all too easy to start creating misery, like keylogging other users or dumping all the databases.

The lesson is that if you have five web servers and databases hosting different sites on your machine, the weakest link among them defines the total security. By hacking the not-so-important site with an SQL injection weakness, you can reach the data of the highly secured applications with no SQL injection weaknesses via a backdoor.

Some thoughts on avoiding this kind of problem
  • Always use frameworks and libraries that remove the possibility of SQL injection. Java has prepared statements and the ORM technologies; other languages have their own ways.
  • Run the database process as a user with low file access privileges so that it can't write files anywhere it shouldn't be able to, or even better, run it on a separate machine (see the sketch after this list).
  • Same for the web server: don't run it as root. There are other ways of hacking web or application servers to gain shell access.
  • A generally good idea is not to reveal too much information about what technology serves the site, to make it harder to exploit knowledge about how it behaves. For example, don't show server versions in HTTP headers, don't show crash stacktraces in server responses in production mode, and so on.
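As a minimal sketch of the second point for MySQL, assuming the web application connects with a dedicated account (the account name and config path here are hypothetical):
mysql -u root -p -e "REVOKE FILE ON *.* FROM 'webapp'@'localhost';"  ## FILE is the global privilege that INTO OUTFILE relies on
## in my.cnf, under [mysqld], secure_file_priv can further restrict where INTO OUTFILE may write:
## secure_file_priv = /var/lib/mysql-files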







Friday, 4 January 2013

Sink Hole Animation

I made a little animation which fakes a sink hole in the platform outside the Umeå Östra train station. This relies on camera tracking, which takes a real scene and incorporates it into a simulated 3D environment.


Prerequisites

A camera (in my case a Canon 550D), some bright post-its, Blender (free and open source), Gimp or other image processing software, a few hours of modelling time and a night's sleep for rendering the scene.

Pick a scene

Figure out a scene. To make the camera tracking easier to automate, pick a bright scene so that you get short shutter times. Also (advice I failed to follow), keep your camera steady and make pans slow and steady to avoid blurry, unsharp frames in your film. Record at the highest resolution your camera supports.

Object tracking

You must find some high-contrast objects in the scene to track during the movement of the real camera, so that Blender can figure out their relative positions. Preferably use objects that show parallax shifts when you move around in the scene. Also use a few objects that lie in your ground plane (the train platform) so that you can create a correct coordinate system.

In this scene I want good resolution around the sink hole, but there are few high-contrast objects, so I scattered some yellow pieces of post-it notes around the area.

Record your movie, import it into Blender and use the new Movie Clip Editor, which arrived in one of the latest releases, to tag tracking markers. In the best of worlds, the software can follow the movement of the high-contrast objects through the whole movie clip. It didn't work like a charm for me; I had to help the tracker manually when the camera panned fast, so that's a note to self for future trackings. When you solve the system, Blender figures out how the camera moved during the shoot and creates a corresponding camera animation for the virtual camera used later when rendering.

Modelling and rendering

I made a really simple hole model in Blender with some gravel textures overlaying each other. Then, by using the compositing features of Blender, you can merge each frame of the original movie clip with the rendered model at the correct camera position. In this example, the large disk representing the ground is not rendered in the final composition, but it can receive shadows, so your own 3D objects can cast shadows onto the ground in the movie.

For reference, the original movie looks like this:


This is based on an excellent explanation of the Blender camera tracking functionality by Andrew Price on BlenderGuru.