Love Git but don’t want to pay GitHub for a private repo? No problem; here’s the solution. I was looking for a way to create a repo on my server and serve it over SSH. Git makes it really simple, and it can be done in two steps (adapted from [Reference 1]):
1. Create a repo
server $ mkdir ~/repos/
server $ cd ~/repos/
server $ GIT_DIR=project.git git init
server $ cd project.git
server $ git --bare update-server-info
server $ cp hooks/post-update.sample hooks/post-update
2. Clone it on the client side via SSH
client $ git clone user@server:~/repos/project.git # see [Reference 2]
Or, to start a fresh project locally and hook it up to the server instead of cloning:
client $ mkdir project
client $ cd project
client $ git init
client $ git remote add origin user@server:~/repos/project.git
client $ git push -u origin master # after the first commit
I needed to extract certain Boost headers from its huge code base for the memory-mapped file & shared-memory containers. The first idea that came to my mind was a simple grep for ‘#include’ statements, then parsing out the paths. This is simple but falls short when headers are conditionally included. For example:

#ifndef NO_STL
#include <vector>
#endif

To be honest, grep won’t be able to handle this. We need a full-fledged C++ preprocessor to correctly resolve the includes and pass in the necessary values/definitions (done with -D for g++). I was struggling a bit until I got a tip from Ralph. It turned out to be very simple:
g++ -D NO_STL -I A_PATH -M source.cpp # to get all headers, including system headers
g++ -D NO_STL -I A_PATH -MM source.cpp # to get all headers, except system headers
g++ -D NO_STL -I A_PATH -H source.cpp # to print all included headers as a tree (nesting shown by dots)
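For context, -M and -MM print the result as a make dependency rule, so the header paths are easy to cut out of the rule body. An illustrative sketch (made-up paths; the Boost Interprocess header names are just examples, not the actual output) of what -MM might print:

```
source.o: source.cpp \
 A_PATH/boost/interprocess/managed_shared_memory.hpp \
 A_PATH/boost/interprocess/mapped_region.hpp
```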
To confess, this was the first time I wrote unit tests for Java & C++, and it turned out to be rather simple. This weekend Daniel hosted a hackathon at his place and invited Kong & me to come over. The first thing we did was TDD. Daniel was fluent in TDD & pair programming, as his company used Ruby and actively worshiped agile development.
I was given a task: implement a set data structure in Java, as simply as possible, just enough to pass the test cases that he would write to challenge the implementation. The first test case asserted the emptiness of a new set via a `boolean isEmpty()` method. Returning `true` was easy. Then Daniel wrote another test that added a new element and required the set to return `false`. I added a count & an array as private data members, but Daniel said I should implement it as simply as possible. So I used a single private member `_isEmpty`, set to `true` by default and flipped to `false` in `add(int value)`.
And we went on adding test cases to cover the methods `int count()`, `void remove(int value)`, `int getIndexOf(int value)`… I introduced 2 bugs along the way; JUnit pointed out exactly which test cases failed, and I located the bugs with ease.
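For the curious, here is a minimal sketch of where that incremental process might end up. This is my reconstruction, not the actual hackathon code; the class name `SimpleSet` and its internals are made up, and only the method names come from the test cases above:

```java
// A deliberately simple int set, grown one test case at a time.
// SimpleSet and its internals are a reconstruction; only the method
// names (isEmpty, add, count, remove, getIndexOf) come from the session.
class SimpleSet {
    private int[] items = new int[8];
    private int count = 0;

    boolean isEmpty() { return count == 0; }

    int count() { return count; }

    void add(int value) {
        if (getIndexOf(value) >= 0) return;        // sets hold no duplicates
        if (count == items.length) {               // grow when full
            int[] bigger = new int[count * 2];
            System.arraycopy(items, 0, bigger, 0, count);
            items = bigger;
        }
        items[count++] = value;
    }

    void remove(int value) {
        int i = getIndexOf(value);
        if (i < 0) return;                         // absent: nothing to do
        items[i] = items[--count];                 // swap the last element in
    }

    int getIndexOf(int value) {
        for (int i = 0; i < count; i++)
            if (items[i] == value) return i;
        return -1;                                 // not found
    }
}
```

Under TDD each of these bodies would start as the dumbest thing that passes, e.g. `isEmpty()` literally returning `true`, and only grow when a new test forces it to.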
The real value shows when refactoring enters the process. The TDD approach & the written test cases greatly aided me in verifying that the refactored code behaved like the old code. When it comes to real-life projects with hundreds of modules and millions of lines of code, refactoring without test coverage is like walking into a minefield. If you were diligent enough to write test cases covering all functions and methods, then you can refactor at will and run against the accumulated test cases with confidence.
However, one must be clear that test cases guarantee only the cases they cover. Passing the test suite after a refactor doesn’t guarantee correct outcomes for untested cases. So be diligent and creative enough to write quality test cases that cover all possible cases and exactly define the behavior of the functions & methods.
Lastly, TDD discourages premature optimization: write the simplest possible code that passes all the test cases, refactor along the way, and profile when you actually need performance. That’s not to downplay the importance of thorough & mindful design of algorithms and data structures. One must find a good balance between TDD and an efficient implementation crafted from the very beginning, rather than leaving refactoring to the very last stage.
The NginX server from the Ubuntu LTS repo was 1.1.x and did not support WebSockets. The minimum NginX version that supports WebSockets is 1.3.13; note that the support comes in the proxy module. I upgraded to 1.4.1 by compiling from source, and the proxy module is enabled by default. Now you can taste WebSockets via an NginX proxy at http://pico.simpleit.us/ . Only valid users can log in and fully experience it. If you want to test our new app, comment below.
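Upgrading alone isn’t quite enough: nginx only proxies WebSockets when the Upgrade handshake is forwarded explicitly. A minimal sketch of the relevant config (the backend address is a placeholder, not our actual setup):

```nginx
location / {
    # placeholder backend; point this at your WebSocket app
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;                      # WebSockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;      # forward the Upgrade handshake
    proxy_set_header Connection "upgrade";
}
```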
I was cleaning up my Apache config and removed the default 000-default & default-ssl entries from Apache’s sites-enabled sub-folder. To my surprise, the default page then went to a web app that I did not expect. It took me a sec to realize 2 things:
The first VirtualHost config matching *:80 (HTTP) or *:443 (HTTPS) is treated as the default site
Apache loads config files in alphabetical order
So I prepended my blog config file’s name with 000-, and now the blog is the default site across simpleit.us.
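In other words, the fix is purely a naming trick. A sketch (the file, server, and path names here are illustrative, not my exact config): a vhost in a file named 000-blog.conf sorts first in sites-enabled, so Apache reads it first and serves it for any request that matches no other vhost:

```apache
# sites-enabled/000-blog.conf: sorts first, so this vhost is the default
<VirtualHost *:80>
    ServerName blog.example.com      # illustrative name
    DocumentRoot /var/www/blog
</VirtualHost>
```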