How To Install A Git Repo & Serve It Via SSH

Love Git but don’t want to pay GitHub for a private repo? No problem. Here’s the solution. I was looking for a way to create a repo and serve it from my own server via SSH. Git makes it really simple, and we can do it in 3 steps (adapted from the [Reference 1]):

1. Create a repo

server $ mkdir ~/repos/
server $ cd ~/repos/
server $ GIT_DIR=project.git git init # modern git: git init --bare project.git
server $ cd project.git
server $ git --bare update-server-info
server $ cp hooks/post-update.sample hooks/post-update
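If you are curious what `update-server-info` actually does, it writes the metadata files that dumb transports read. Here's a throwaway rehearsal under /tmp (hypothetical paths, nothing real touched) that checks they exist:

```shell
# Rehearse step 1 in /tmp and verify the metadata files that
# update-server-info writes for dumb transports.
rm -rf /tmp/repos-demo
mkdir -p /tmp/repos-demo && cd /tmp/repos-demo
GIT_DIR=demo.git git init
cd demo.git
git --bare update-server-info
ls info/refs objects/info/packs
```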

2. Clone it on the client side via SSH

client $ git clone user@server:~/repos/project.git # Check the [Reference 2]

Or, equivalently, wire up an existing local directory by hand:

client $ mkdir project
client $ cd project
client $ git init
client $ git remote add origin user@server:~/repos/project.git

3. Code & Push

client $ touch README
client $ git add README
client $ git commit -m "Example."
client $ git push origin master
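The three steps can be rehearsed end-to-end with local paths before any SSH is involved. A sketch using throwaway /tmp directories (hypothetical paths, with `git init --bare` as a modern shorthand for step 1):

```shell
# End-to-end rehearsal: a local path stands in for user@server:~/repos/
rm -rf /tmp/repos/project.git /tmp/project
mkdir -p /tmp/repos
git init --bare /tmp/repos/project.git
git clone /tmp/repos/project.git /tmp/project   # warns: cloning an empty repo
cd /tmp/project
touch README
git add README
git -c user.name=demo -c user.email=demo@example.com commit -m "Example."
git push origin HEAD   # HEAD = current branch (master on older git)
```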


References:

1. How to serve a Git repo via SSH:
2. How to map SSH identity files to SSH servers & usernames:

Listing all the header files your C++ source depends on

I needed to extract certain Boost headers from its huge code base for the memory-mapped file & shared-memory containers. The first idea that came to my mind was a simple grep for ‘#include’ statements, then parsing the paths. This is simple but not so useful when headers are conditionally included. For example:

#ifndef NO_STL
#include "my_class_stl.h"
#else
#include "my_class_no_stl.h"
#endif
To be honest, grep won’t be able to handle that. We need a full-fledged C++ preprocessor to correctly resolve the includes with the necessary values/definitions passed in (done with -D for g++). I was struggling a bit until I got a tip from Ralph. It turned out to be very simple:

g++ -D NO_STL -I A_PATH -M source.cpp # to get all headers, including system headers
g++ -D NO_STL -I A_PATH -MM source.cpp # to get all headers, except system headers
g++ -D NO_STL -I A_PATH -H source.cpp # to print the headers as a nesting tree (on stderr)
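For a flat list of headers, the -MM output (a Make-style dependency rule) can be post-processed in the shell. A sketch, assuming the same source.cpp and flags as above:

```shell
# Turn the Make-style dependency rule into one unique header per line:
# split on spaces and line-continuation backslashes, keep the .h/.hpp
# paths, and de-duplicate.
g++ -D NO_STL -I A_PATH -MM source.cpp \
  | tr ' \\' '\n' \
  | grep -E '\.(h|hpp)$' \
  | sort -u
```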


How I picked up TDD & Google Test in a few hours

TDD Diagram

To confess, this is the first time I have written unit tests for Java & C++, and it turned out to be rather simple. This weekend Daniel hosted a hackathon at his place and invited Kong & me to come over. The first thing we did was TDD. Daniel was fluent in TDD & pair-programming, as his company used Ruby and actively worshiped agile development.

I was given a task: implement a set data structure in Java, as simply as possible, to pass the test cases that he would write to challenge the implementation. The first test case asserted the emptiness of a new set via a `boolean isEmpty()` method. Returning `true` was easy. Then Daniel wrote another test that added a new element and required `isEmpty()` to return `false`. I added a count & an array as private data members, but Daniel said I should implement it as simply as possible. So I used a single private member `_isEmpty`, set to `true` by default and to `false` when `add(int value)` is invoked.

And we went on adding test cases to cover the methods `int count()`, `void remove(int value)`, `int getIndexOf(int value)`… I introduced 2 bugs; JUnit pointed out which test cases failed, and I was able to locate the bugs with ease.

I got excited, moved on to C++, and installed the Google Test framework. It was so simple that I regret not using it earlier. Read my TDD C++ code and you will find it super simple. Here’s a screenshot of the test program:

Passed All Test Cases (Using Google Test)

The real value shows when refactoring enters the process. The TDD approach & the written test cases greatly aided me in verifying that the refactored code behaved like the old code. When it comes to real-life projects with hundreds of modules and millions of lines of code, refactoring without test coverage is like walking into a minefield. If you were diligent enough to write test cases covering all functions and methods, then you can refactor at will and run against the accumulated test cases with confidence.

However, one must be clear that test cases guarantee only the cases they cover. Running the refactored code against the test cases doesn’t guarantee correct outcomes for untested cases. So be diligent and creative enough to write quality test cases that cover all possible cases and exactly define the behavior of the functions & methods.

Lastly, TDD discourages premature optimization. TDD requires you to write the simplest possible code that passes all the test cases, refactor along the way, and profile only when you need performance. That’s not to downplay the importance of thorough & mindful design of algorithms and data structures. One must find a good balance between TDD & an efficient implementation crafted from the very beginning, rather than leaving refactoring to the very last stage.

Edit: If the above C++ Set example is oversimplified for you, check out its sibling, WordPath.

WebSocket Being Served Via NginX Proxy Now

The NginX server from the Ubuntu LTS repo was 1.1.x, which did not support WebSockets. The minimum NginX version that supports WebSockets is 1.3.13; note that the support comes in the proxy module. I upgraded to 1.4.1 by compiling from source, and the proxy module is enabled by default. Now you can taste WebSockets via the NginX proxy at . Only valid users can log in and fully experience it. If you want to test our new app, comment below.

The WebSocket proxy configuration I used:

server {
    listen 80;
    root /var/www/;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

For the curious, here are the configure flags I used to compile NginX 1.4.1 (remember to install the SSL dev packages):

./configure --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --prefix=/usr --pid-path=/run/ --error-log-path=/var/log/nginx/error.log --with-openssl=/usr/lib/ssl


Why does Apache link sites-enabled/000-default to sites-available/default?

I was cleaning up the Apache config and removed the default 000-default & default-ssl from the Apache sites-enabled sub-folder. To my surprise, the default page then went to a web app that I did not expect. It took me a sec to realize 2 things:

  1. The first VirtualHost config whose address matches *:80 (HTTP) or *:443 (HTTPS) is treated as the default site
  2. Apache loads the files in sites-enabled in alphabetical order

So I prepended my blog’s config file name with 000-, and now the blog is the default site.
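The alphabetical ordering is just how Apache walks the included sites-enabled entries, the same order a plain sorted ls gives you. A quick simulation with hypothetical file names:

```shell
# Simulate a sites-enabled directory: a 000- prefix sorts first,
# so that vhost is loaded first and becomes the default.
rm -rf /tmp/sites-enabled-demo
mkdir -p /tmp/sites-enabled-demo
touch /tmp/sites-enabled-demo/somewebapp.conf /tmp/sites-enabled-demo/000-blog.conf
ls /tmp/sites-enabled-demo | head -n 1   # 000-blog.conf comes first
```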

Edit: This site is now powered by NginX