
Friday, October 30, 2009

How to generate a stackdump with GDB


I’m not a big GDB guy, but Google always helps:

  • Create a textfile with the following content:
    set height 0
    thread apply all bt
  • Run the following command:
    gdb $EXE -pid $PID -command $TEXTFILE > $OUTPUTFILE
    • $EXE is the path to the executable
    • $PID is the PID it is running under
    • $TEXTFILE is the file where you've saved the previous commands
    • $OUTPUTFILE is the file where you would like your stackdump to be saved.
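Putting the steps together, here is a hedged end-to-end sketch (the file names and the use of /bin/sleep as a stand-in target are just for illustration; -batch is added so GDB exits after running the commands instead of dropping to its prompt):

```shell
# Write the GDB command file.
cat > /tmp/gdb-commands.txt <<'EOF'
set height 0
thread apply all bt
EOF

# Start a throwaway process to dump (stand-in for your real $EXE/$PID).
sleep 60 &
PID=$!

# Dump all thread stacks to a file (skipped if gdb is not installed).
if command -v gdb >/dev/null 2>&1; then
  gdb -batch /bin/sleep -pid "$PID" -command /tmp/gdb-commands.txt \
    > /tmp/stackdump.txt 2>&1
fi

kill "$PID" 2>/dev/null
```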

The cool little crawling logo was taken from HiR, head over there for an explanation.

The importance of false positives


An interesting paper was brought to my attention recently by this blog post: The Base Rate Fallacy and its implications for the difficulty of Intrusion Detection. The central question of this paper is: if we have a flow of N packets per day and our network IDS has a false-positive rate of X, what is the probability that we are experiencing a real attack, given that the IDS says that we are? The paper uses Bayes’ theorem (of which you can find a nice explanation here) to put some numbers in and to get horrifying results (many false alerts), and to conclude that such a rate of FPs seriously undermines the credibility of the system.
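To make the base-rate effect concrete, here is a small sketch of the Bayes computation (the class name and the numbers are mine, chosen for illustration, not taken from the paper):

```java
// Illustrative base-rate computation: even a very low false-positive
// rate produces many false alarms when real attacks are rare.
public class BaseRateDemo {
    /** P(intrusion | alarm) via Bayes' theorem. */
    static double posterior(double pIntrusion, double pAlarmGivenIntrusion,
                            double pAlarmGivenNoIntrusion) {
        double truePos = pAlarmGivenIntrusion * pIntrusion;
        double falsePos = pAlarmGivenNoIntrusion * (1.0 - pIntrusion);
        return truePos / (truePos + falsePos);
    }

    public static void main(String[] args) {
        // Say 10 packets in a million belong to a real attack...
        double pIntrusion = 1e-5;
        // ...the IDS flags every attack packet...
        double detectionRate = 1.0;
        // ...and raises a false alarm on only 0.001% of benign packets.
        double fpRate = 1e-5;
        System.out.printf("P(real attack | alarm) = %.4f%n",
                posterior(pIntrusion, detectionRate, fpRate));
        // Roughly 0.5: about half of all alarms are false,
        // despite the seemingly excellent 0.001% FP rate.
    }
}
```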

The issue of false positives is also a concern in the anti-malware industry. And while I rant quite a bit about the AV industry, you have to give this one to them: the number of false positives is really low. For example, in the AV-Comparatives test 20 false positives is considered many, even though the collection contains over 1 500 000 samples (so the acceptable FP rate is below 0.0015%!). Update: David Harley was kind enough to correct me, because I was comparing apples (the number of malware samples) to oranges (the number of clean files falsely detected). So here is an updated calculation: the Bit9 Global File Registry has more than 6 billion files indexed (they index clean files). Consider whatever percentage of that is used by AV-Comparatives for FP testing (as David correctly pointed out, the cleanset size of AV-Comparatives is not public information – although I would be surprised if it was less than 1 TB). Some back-of-the-napkin calculations: let's say that AV-Comparatives has only one hundredth of one percent of the 6 billion files, which would result in 600 000 files. Even so, 20 files out of 600 000 is just 0.003%.

Now there were (and will be) a couple of big f***-ups by different companies (like detecting Windows files as malware), but still, consumers have a very good reason to trust them. Compare this with more “chatty” solutions like software firewalls or – why not – the UAC. Any good security solution needs an FP rate at least this low and much better detection. AV companies with low FP rates – we salute you!

PS. There might be an argument to be made that different false positives should be weighted differently (for example depending on the popularity of the file) to emphasize the big problems (when out-of-control heuristics start detecting Windows components, for example). That is a valid argument which can be analyzed, but the fact remains that the FP rates of AV solutions are very low!

Picture taken from wadem's photostream with permission.

Thursday, October 29, 2009

Fun videos


Got the links from a friend:

Why network neutrality is a big deal


Reposted from the packetlife blog. We already pay for the bandwidth. The content providers already pay for the bandwidth. Anyone claiming anything different is either very misinformed or straight-out lying!


Wednesday, October 28, 2009

Bohemian Bankruptcy


Via Naked Capitalism

RequestPolicy Firefox Plugin – the ultimate NoScript


I recently found out about the following Firefox plugin/addon: RequestPolicy (via this blogpost) – see also the Firefox addon page. Its function is to whitelist all kinds of cross-domain requests, including scripts, style-sheets, images, objects (Flash, Java, Silverlight), etc. Anything in a webpage hosted on domain A can reference other content from domain A, but if it references content from other domains, those must be present in the RequestPolicy whitelist. There are three types of entries which can be added to the whitelist:

  • source (ie. pages on domain S can reference anything)
  • destination (ie. anything can reference domain D)
  • source-to-destination (ie. pages on domain S can reference resources on domain D)

There are still some glitches to work out, but all in all it is a good tool for the security conscious. So is it worth it? It depends. If you are not a power-user with some knowledge of HTML (ie. how CSS, HTML, JS and plugin objects fit together to form the page), I would recommend against it (because you will have the experience of webpages “not working for no good reason”). It takes some initial training (just like NoScript), but after that it is pretty unobtrusive (even though not as invisible as NoScript, because it also blocks images / style-sheets).


Does it make you more secure? Yes, but only in the “you don’t have to outrun the bear” sense: once the attacker has enough control to insert a linked resource (script, iframe, etc) in a page, s/he almost certainly has enough control to insert the attack script directly into the page, rather than linking to it. The current practice of linking to a centralized place exists mostly because the attackers want centralized control (for example to add new exploits) and statistics. Should such a whitelisting solution become widely used, they could switch with very little effort to the “insert everything into the page” model. Still, such a solution shouldn’t be underestimated, since it gives almost perfect protection under the current conditions.

Update: If leaving digital trails is something you like to avoid, take into consideration that the fact that a given site is present in the whitelist of addons such as NoScript or RequestPolicy can be considered proof that you've visited the given site (unless it is on the default list of the respective addon). Just something to consider from a privacy standpoint. Life is a series of compromises and everyone has to decide for herself how to make them.

Picture taken from Luke Hoagland's photostream with permission.

help build the mozilla developer network


After asking you if you use Perl, now I’m asking you to help build the Mozilla Developer Network (MDC). They are running a survey to get to know their audience better. Please take it if you use the MDC and have a couple of minutes of free time.

PS. You can read some preliminary results from the Perl IDE poll here.

Monday, October 26, 2009

Taking apart the Dell Inspiron 9400


A word of caution: taking apart your laptop will void your warranty. Do this operation at your own risk. If you are not comfortable doing it, I would recommend against it. Disassembling a laptop is harder than taking apart a desktop computer (mostly because of the confined space), so you shouldn’t do it if you haven’t “looked into” at least a couple of desktops already!

You can see a high-resolution version of the images below by clicking on them.

Step 0: what tools you need – a long Phillips (“cross”) screwdriver, preferably one with a magnetic tip (but you can manage without it).


Step 1: disconnect the antenna from the wireless card. This is important, since it is connected to the LCD panel, which we need to remove. Do this by pulling carefully upwards on the connectors (not the wire). Don’t worry about knowing which wire goes where when reassembling, since it is clearly marked (with small white / black arrows).


Step 2: tilt the screen all the way backwards (so that it is parallel with the bottom part) and remove the upper part of the cover. There is a small opening where the marking is on the image, you can start there. Carefully remove the whole cover. It has a couple of plastic “ears” which you have to be careful not to break.


Step 3: remove the battery, hard drive, optical drive and bluetooth adapter. You eject the battery by sliding the middle lever. Remove the hard drive by removing the two screws marked at the right. You can also remove the bluetooth adapter, which is near the hard drive. Sidenote: apart from the hard drive screws, you can distinguish the screws of the lower part from those of the upper part by their length. The rule is: lower part – long screws, upper part – short screws. To remove the optical drive, first remove the screw marked by a lock, and then push on the metal part with the screwdriver. This should pop it out just enough that you can pull on it.


Step 4: remove the screws holding the screen and the two screws holding the keyboard.


Step 5: disconnect the CMOS battery (this will result in you losing your BIOS settings, which you will have to reset at the first boot after reassembly). Also, disconnect the keyboard. This is a tricky connector: you have to flip the upper part open to remove the cable. When putting it back, you first have to make sure that you’ve properly aligned the cable with the connector, and then push down on it. If it doesn’t go in easily, don’t force it; rather take it out and try again, making sure that the alignment is correct (straight).


Step 6: disconnect the LCD panel and remove it. Unscrew the upper part, in the locations marked with “P”. Disconnect the two cables linking it to the mainboard (the ones towards the middle). Flip the base over and remove the bottom screws also. At this point you can separate the upper and lower part of the base.


Step 7: You can remove the PCMCIA adapter.


Step 8: The laptop is almost completely disassembled at this point. You can continue removing parts if you need to, however take care when working around the coolers: tightening them too much can result in the CPU/GPU cracking. Make them too loose, however, and your cooling will suffer.


Happy hacking!

Friday, October 23, 2009

Watch out for those reviews…


Recently I was buying a notebook HDD, and after considering a Samsung SpinPoint model, I looked around the net to see if there were any known issues with the model. So I stumbled upon this page and my blood ran cold. Quote:

One of the most common problems Samsung SpinPoint hard drives experience is burnt cuircuit board(PCB).


Samsung hard drives could also suffer from firmware problems.


Another quite common symptom Samsung drives experience is clicking/knocking sound.


There is one more problem that is typical for all hard drives and Samsung drives particularly: bad sectors.

Is this drive really of such poor quality? Does it really have all these problems? But then I started looking around on their site, and they seem to have the same or very similar text for every type of HDD out there. The conclusion: they (Data Cent) are just trying to spam Google, and I’m inclined to believe that most of their advice isn’t founded on facts, but rather on a randomized text generator. I for one encourage people not to take their business to such a company.

PS. All the links to them are nofollow, so I’m not giving them any Google love.

Picture taken from barnoid's photostream with permission.

And now for some upbeat news


While I certainly like to rant, one shouldn’t forget about the more sunny side of life (unless you want to go berserk). So here are some random positive things:

Some songs which I like:

A funny image from a friend:


A couple of great freeware programs for the Windows platform:

  • CDBurnerXP – does everything Nero does, for free!
  • DVD Flick – while from the technical standpoint it is “just a wrapper” over FFMpeg and similar tools, it does a great job – you can create your DVD in a couple of steps
  • foobar2000 – a great little MP3 player, especially for those of us who liked the old Winamp, before it tried to do everything. And it can also do batch transcoding!
  • IrfanView – the free image viewer / converter!
  • 7-zip – an open-source WinRAR alternative. Supports a lot of formats
  • Far Manager Open Source – a great native win32 file manager with a retro look
  • BB FlashBack Express – a free screen capture software which works great
  • VideoLan (or VLC as it is better known) – the simple solution to play all your media, without having to install tons of codecs. If it had slightly better playlist management, I would use it as my primary media-player

A great quote: “the difference between communism and capitalism is that in the first men exploit other men, and in the second it is the other way around”. Found it via this New York Times blog, written by the authors of Freakonomics (and now the sequel Superfreakonomics). It is a great blog, worth the read. Where else do you find a rigorous analysis of the logic in newspaper comics?

So there you have it, have a great day! And maybe listen to some French striptease songs :-) (just a little SEO for a friend ;-))

Thursday, October 22, 2009

The difference between additive and subtractive color schemes


I’ve known for the longest time that there are two ways of creating / describing colors: additive (RGB) and subtractive (CMYK). However I never really understood the equivalence between them, until recently, when I picked up a book which presented the general concepts of typography. This is very cool, so I’ll try to document it here (because I didn’t find it elsewhere on the ‘net – although it almost certainly exists).

Let’s begin with the additive part:


This is done by focusing one to three of the base colors (Red, Green and Blue) in the same point to create a given color. (Technically it isn’t “the same point”, just three points in such close proximity that you can’t tell the difference). The base colors can be generated in different ways: an electron beam hitting differently colored phosphor particles (CRT), colored LEDs or filtering out colors from a white light with colored polarization filters (LCDs). Already we can see a similarity between the “additive” and “subtractive” models: even when using the additive model, we sometimes create the light using subtraction (in the case of LCDs).

Now for the subtractive part:


Here we have a light source emitting white (ie. containing all three of the RGB components) beams of light. These beams hit a surface and some of the components get absorbed (“subtracted” from the beam), while others get reflected. The ones reflected form the resulting color. From this point of view the subtractive model is analogous to the way LCDs create colors. The three colors used in the subtractive model (Cyan, Magenta and Yellow) are chosen because they absorb the different components (ie R/G/B) of the white light.

There are other technical details (for example: the presence/absence of each component is not a binary 0/1 value, but a continuously varying one, resulting in an infinite number of colors; also, the absorption in the CMYK model is neither perfect nor uniform across the different components; finally, absorbing all the light – ie. creating black – is theoretically possible by combining CMY, but it would result in a – relatively speaking – very thick layer of paint, which is why black ink is also used in the printing process), but the essence of it is: the additive and subtractive models are very similar.
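As a toy illustration of this complementary relationship (a sketch only – it ignores the black/K component and real-world ink behaviour; the class and method names are mine): each subtractive primary absorbs exactly one additive primary, so the two models are simply complements of each other.

```java
// Minimal sketch of the RGB <-> CMY relationship: a subtractive
// primary (Cyan/Magenta/Yellow) absorbs one additive primary
// (Red/Green/Blue), so CMY is just the complement of RGB.
public class ColorModels {
    /** Convert additive RGB (components in 0..1) to subtractive CMY. */
    static double[] rgbToCmy(double r, double g, double b) {
        return new double[] { 1.0 - r, 1.0 - g, 1.0 - b };
    }

    public static void main(String[] args) {
        double[] cmy = rgbToCmy(1.0, 0.0, 0.0);  // pure red light
        // Red survives only if no cyan ink absorbs it: C=0, M=1, Y=1,
        // ie. printed red is a mix of magenta and yellow.
        System.out.printf("C=%.1f M=%.1f Y=%.1f%n", cmy[0], cmy[1], cmy[2]);
    }
}
```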

Hope this helps somebody :-)

If you use Perl…


Please take a minute and answer the poll “Which editor(s) or IDE(s) are you using for Perl development?“ (via perlbuzz and szabgab).

Picture taken from Knightrider's photostream with permission.



Some links I’ve shared recently:

  • Comodo Instant Malware Analysis (30 Sep 2009)
  • Gomorra (2008) (30 Sep 2009)
  • A Unified Theory of Superman's Powers (07 Oct 2009)
  • myNetWatchman – Network Intrusion Detection and Reporting (11 Oct 2009)
  • Laffer curve – Wikipedia, the free encyclopedia (12 Oct 2009): “On this curve, developed by the brilliant economist Arthur Laffer, as the tax rate increases, the amount of revenue increases, but at an increasingly slower rate than the tax rate, due to increased avoidance, evasion, and most of all disincentive to engage in the taxed activity.”
  • BashPitfalls – Greg's Wiki (12 Oct 2009): very useful, every linux (power-)user should read this
  • Privax – Protecting Your Online Privacy (12 Oct 2009)
  • Security Conference – SecTor 2008 (12 Oct 2009)
  • Windd 1.3 Final! (x86 and x64) – Matthieu Suiche's blog (12 Oct 2009): liked this new feature: can generate a Blue Screen of Death :-)
  • When Noise is Your Friend: Smoothed Analysis – A Computer Scientist in a Business School (13 Oct 2009)
  • Greg Smith's Note Magnet: Watching a hard drive die (13 Oct 2009)
  • Be lucky – it's an easy skill to learn – Telegraph (13 Oct 2009)
  • Find & Replace across multiple files in linux (14 Oct 2009)
  • Likewise – Open Source Software that Authenticates Linux, Unix, and Mac ... (17 Oct 2009)
  • Unattended, A Windows deployment system: Welcome (17 Oct 2009)
  • Sammon Projection (19 Oct 2009)

A couple of new challenges


Here are a couple of challenges I found on the interwebs:

  • SSHliders – this one is centered around *nix shell scripting and more advanced topics like pipes.
  • Hugi Size Coding Compo #29 (from Hugi) – not much time left there, the deadline is the 28th of October. No flashy prizes either, just the bragging rights that you’ve created something useful in less than 124 bytes (the size of the current leader)

Also, the solution to the Prison Break EH challenge has been posted (with additional videos on the radajo blog). There is a lot of cool networking info in there, worth the read!

Have fun!

Wednesday, October 21, 2009

Using TarInputStream from Java


Recently I had to parse through a bunch of logs, scattered in subdirectories and different types of archives (tar, bz2 and gz). My first thought was of course Perl (since it is the language for parsing quasi-freeform text), however I didn’t have a “streaming” implementation of the archive modules, which in my case was very important, since the archives were big and reading them completely into memory was not acceptable. So I found TarInputStream / CBZip2InputStream from Apache Ant and GZIPInputStream, which is readily available in the JRE. While the last two are quite straight-forward to use, I had to beat my head against the wall for quite some time before I managed to use TarInputStream. To save other people the hassle, here is a short writeup on what I’ve learned:

  • After creating the TarInputStream, you start out by calling getNextEntry.
  • You do this until it returns null (similar to how you would read a textfile line-by-line with BufferedReader)
  • tar doesn’t actually compress anything, it just concatenates the data in a sequence of <header> <data> series. After calling getNextEntry the TarInputStream is positioned right at the start of the data for the given entry (if it is a file, which you should also check)
  • To read the data associated with the TarEntry you just obtained, you have two possibilities:
    • You can use the copyEntryContents method on the stream to put the data into another stream (in memory, in another file, etc). Just make sure that you have enough memory / disk space to do so
    • You can read the contents directly from the stream. For example you can layer a GZIPInputStream (or CBZip2InputStream) over the TarInputStream if you have a gz / bz2 inside a tar (usually it’s the reverse; this was the case for my little parser for example)

One thing to watch out for if you choose the second method is the fact that TarInputStream is very sensitive to positioning. So if the stream you layer on top of it has an off-by-one error (ie. it reads a couple of bytes more than the actual size of the data), you can quickly get a mysterious IOException saying something along the lines of “reading from output buffer”.

My solution to the problem was to layer a custom FilterInputStream on top of the TarInputStream before handing it over to another stream. This filter does two things:

  1. it makes sure that the stream on top of it can read only N bytes, where N is the size of the entry and
  2. when close is called on it, it doesn’t propagate the call to the TarInputStream (so the latter isn’t closed and further entries can be processed)

Below you can see this filter stream:

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SizeLimitInputStream extends FilterInputStream {
  final long maxSize;
  final InputStream base;
  long alreadyRead;

  public SizeLimitInputStream(InputStream in, long maxSize) {
    super(in);
    this.maxSize = maxSize;
    this.alreadyRead = 0;
    this.base = in;
  }

  public synchronized int available() throws IOException {
    long a = base.available();
    if (alreadyRead + a > maxSize)
      a = maxSize - alreadyRead;
    return (int)a;
  }

  public void close() {
    // do nothing: the underlying TarInputStream must stay open
    // so that further entries can be read
  }

  public boolean markSupported() {
    return false;
  }

  public synchronized void mark(int readlimit) {
    // do nothing
  }

  public synchronized void reset() throws IOException {
    // do nothing
  }

  public synchronized int read() throws IOException {
    if (alreadyRead >= maxSize)
      return -1; // end of this entry
    int r = base.read();
    if (r >= 0)
      alreadyRead += 1;
    return r;
  }

  public synchronized int read(byte[] b) throws IOException {
    return read(b, 0, b.length);
  }

  public synchronized int read(byte[] b, int off, int len) throws IOException {
    if (alreadyRead >= maxSize)
      return -1;
    if (alreadyRead + len > maxSize)
      len = (int)(maxSize - alreadyRead);
    int r = base.read(b, off, len);
    if (r > 0)
      alreadyRead += r;
    return r;
  }

  public synchronized long skip(long n) throws IOException {
    if (n <= 0 || alreadyRead >= maxSize)
      return 0;
    if (alreadyRead + n > maxSize)
      n = maxSize - alreadyRead;
    long r = base.skip(n);
    alreadyRead += r;
    return r;
  }
}

So your code would look something along the lines of:

TarInputStream tis = new TarInputStream(fileInputStream);
TarEntry tarEntry;
while ((tarEntry = tis.getNextEntry()) != null) {
  if (tarEntry.isDirectory()) continue;

  InputStream tmpIn = new SizeLimitInputStream(tis, tarEntry.getSize());
  // process tmpIn - create other streams on top of it for example ...
}

Hope this helps.

Picture taken from quapan's photostream with permission.

Tuesday, October 13, 2009

Avalonix Wireless Camera review


A wireless security camera is quite an interesting piece of technical equipment, which – the conventional wisdom holds – can deter people from breaking the rules (whatever those might be) or help demonstrate after the fact “who done it”. Their real value is however questionable. For one, they can become a prime target for attackers (“the first to fall”). Second, regardless of what is shown in TV shows, the resolution is quite poor, so it is improbable that you would get something out of it which could be used to identify a person with any certainty. Finally, there is a big problem with unauthorized people getting access to the pictures (voyeurism), especially with wireless security cameras, and especially with Internet-connected wireless security cameras.

If you have carefully considered all the possible drawbacks and still want to install a wireless security camera (or more), you might want to take a look at the Avalonix 5.8GHz Wireless cameras. They advertise a reception distance between 3000 ft (around one km) and 7 miles (around 11 km) with a proper antenna. They have a separate night-vision mode and 420 lines of resolution. Again, consider if this is enough for your purpose. 420 lines is lower than the resolution of a standard-definition TV (which is 480). Two other concerns are the fact that the only provider for Avalonix cameras currently seems to be offline (even though the server responds to pings, the webpage doesn’t load – tried from two geographically distant networks – and the Google cache shows a lot of MySQL errors), and that the only mention I could find contains a complaint. The technical specification also fails to clarify whether this is an outdoor or indoor camera.

On the bright side of this wireless security camera, the domain seems to be rather old (stable) and its owner doesn’t hide behind a proxy registration service (off topic: I too agree with the opinion that only private individuals should be allowed to use domain-by-proxy type services, and that businesses should be required to provide real, verifiable contact information).

Currently my advice would be to go for a different type of camera, one which is more widely available and for which more reviews exist. Again, consider that such equipment might be a prime target for vandals, so you might be better off installing it in a covert location, perhaps adding some cheap fake camera lookalikes. Also, consider using a high-resolution camera. One good test is to take a digital camera and shoot a couple of photos from the angle where you are considering mounting the camera (include some people in the photos near the perimeter of the view field). Then take the photo (which is probably of a high resolution – 5+ megapixels) and shrink it down to whatever the camera specification says. See if you can still see enough details of the people in the picture (and consider that scaling down preserves more information than just taking the picture at the smaller resolution – so if you are unable to distinguish faces now, you won’t be able to with the camera).

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Friday, October 09, 2009

Grooveshark


Disclaimer: I received no compensation for this review. All the opinions are my own.

There are a couple of actions you can do with music you have on your computer:

  • You can listen to an arbitrary song from your library
  • You can jump in the song you are listening to
  • You can share it with a friend
  • You can convert it and listen using an alternative format (ie. on your MP3 player, on your phone, in your car, etc)
  • You can edit it using a sound editor

Thus, any service which aims to replace your music library should provide as many of these options as possible. Recently I found out about Grooveshark which provides all but the last two of these actions for free! Additionally they have a very slick interface and you don’t even have to sign up to use the service!


Additionally they provide an “auto-play” feature which adds “similar” songs to your playlist and seems to work acceptably (it is no Pandora though). Another thing I appreciate is the clear revenue model: it is ad-supported, or you can become a premium member for as little as $3 / month and see no ads. I don’t know if the numbers are right, but they seem much more plausible than those of Jango. I don’t want to diss them, but their lack of a business model (no premium accounts!?) always concerned me, and recently they seem to be experimenting with a lot of things (like taking away the possibility of directly playing a song and instead linking to Rhapsody), which is both annoying and concerning. My only concern is the fact that I found this site via the now defunct Seeqpod, which used (intentionally or unintentionally) publicly available MP3 files to power its service. While Grooveshark seems to use its own servers, rather than arbitrary hosts on the Internet, the file-name quality suggests that these files were arbitrarily downloaded from different sources (public folders? P2P? – who knows), a practice which might get them in hot water with the RIAA, especially given that they seem to be based in the USA.

In conclusion: Grooveshark is great for now. Hopefully it will last in its current (or better) form for a long time. Below you can see an example of the embedded player:

Update: check out the new Grooveshark interface! It is very nice. Also, I'm a VIP subscriber to Grooveshark now!

Picture taken from gonzalovalenzuela's photostream with permission.

Sony Ericsson Satio


I admit: I haven’t jumped on the “hip new phone with all kinds of bells and whistles” bandwagon yet. I’m very content with my small, sturdy Nokia. However one has to admit that there is a certain wow effect when one looks at phones like the Sony Ericsson Satio. Even though I had some bad experiences with Sony Ericsson phones in the past (in my opinion they are in the “looks shiny but don’t expect it to last more than a year” category), this phone looks nice.

It has all the features one would expect from such a phone (quad-band, Wi-Fi, Bluetooth, GPS, USB, etc). Of course it has a full QWERTY keyboard with touch-screen and uses a standard stereo jack (this is a big advantage and seems to be a standard feature for Sony Ericsson products). However the price seems to be a little too steep. I still can’t see a real return on investment in buying a phone from this category. You could argue that it brings the functionality of a netbook, a digital camera, a media player and a phone in one, but I still feel that it doesn’t do even an average job at all of them (I imagine that it is a little too small for a netbook replacement, and it doesn’t have optical zoom on the photography side – of course no consumer phone does – which makes the quality of the photos too low, even for an amateur like myself).

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

LandAirSea Systems - GPS Tracking


GPS tracking is a contentious issue, even more so when it is forced on the subject. There are however situations where we can be legally obligated to subject ourselves to such tracking (for example when we are using the employer’s car, or if our legal guardian wishes it). The LandAirSea company specializes in such tracking systems. They provide a variety of models, some with real-time tracking capabilities, others requiring the physical re-acquisition of the equipment for later analysis. For example there is the tracking key (available also on Amazon – disclosure: the link contains my affiliate ID), a very small form-factor solution with mostly positive reviews.

The company has been around since 1994, so, as the old cliché says, they must be doing something right. One positive aspect I observed is that tracking seems to be free for many models; there are no additional recurring costs besides the purchase of the GPS unit. On the downside (if this is a concern for you), the real-time tracking seems to be limited to the US mainland. In conclusion: good company, nice offer, and it is worth considering if you are shopping around for such equipment. One thing to keep in mind is that GPS units can be quite slow to acquire the signal in my experience (up to a couple of minutes if the unit isn’t moving, and even more if it is in motion).

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Wednesday, October 07, 2009

Fixing CVS annotate


Yes, some of us work on projects started almost a decade ago and as such we use CVS (yes, CVS has many limitations and yes, git is better – for a nice introduction see Randal Schwartz’s video about git), but migrating is not directly justifiable (it would involve: training IT staff to be able to maintain the repo, rewriting automation code which relies on CVS and retraining programmers – even though some of these could be postponed, given that git contains a CVS bridge). Anyway, the problem which I faced was the following: cvs annotate only displays the first 8 characters of the username, which can be ambiguous if multiple people have similar usernames (which can easily happen if there is a convention like name.surname). Here is my solution to the problem: fetch the log for the file and get the user associated with each version (in the log, CVS includes the full usernames). Then fetch the annotated version of the file and use the version number to disambiguate the user. Here is some Perl code:

sub processAnnotations {
  my $fileName = shift;
  my ($cmdLine, $pid, %revisions);

  # First pass: "cvs log" contains the full author name for each revision
  $cmdLine = "cvs -z9 log -N '$fileName'";
  $pid = open F, "$cmdLine |" or die "cvs log failed: $!";
  my $rev;
  while (<F>) {
    $rev = $1 if (/^revision ([0-9\.]+)$/);
    $revisions{$rev} = $1 if (defined $rev && /^date:.*?author: (.*?);/);
  }
  close F;
  waitpid($pid, 0);

  # Second pass: replace the truncated author in the annotate output
  # with the full name looked up by revision number
  $cmdLine = "cvs -z9 annotate '$fileName'";
  $pid = open F, "$cmdLine |" or die "cvs annotate failed: $!";
  my @annFileLines;
  while (<F>) {
    if (/^(\d[0-9\.]+)(\s+)\(\S+ (.*)/s && exists $revisions{$1}) {
      $_ = "$1$2(" . $revisions{$1} . " $3";
    }
    push @annFileLines, $_;
  }
  close F;
  waitpid($pid, 0);
  return join('', @annFileLines);
}
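The core trick – mapping each revision to the full author name from the log, then rewriting the truncated author in the annotate output – can be illustrated in isolation. The sketch below uses Python for a self-contained, runnable example; the revision table and the sample line are invented:

```python
import re

# Invented data standing in for what the real script parses out of "cvs log":
# revision number -> full author name. Note the two authors sharing the same
# first 8 characters, which is exactly the ambiguity in "cvs annotate".
revision_authors = {
    "1.14": "john.smith",
    "1.15": "john.smithson",
}

def fix_line(line: str) -> str:
    """Rewrite the truncated author in one annotate line via the revision."""
    m = re.match(r"^([0-9.]+)(\s+)\((\S+)\s+(.*)$", line, re.S)
    if m and m.group(1) in revision_authors:
        return f"{m.group(1)}{m.group(2)}({revision_authors[m.group(1)]} {m.group(4)}"
    return line

print(fix_line("1.15      (john.smi 05-Oct-09): some code"))
# -> "1.15      (john.smithson 05-Oct-09): some code"
```

The revision number is unambiguous even when the 8-character author prefix is not, which is why the two-pass approach works.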

PS. I verified in the CVS source that the output width for the author field is hardcoded:

		    sprintf (buf, "%-12s (%-8.8s ",

Picture taken from Valeriana Solaris' photostream with permission.

CBT Planet - PR Newswire


cbt planet recently announced their MCITP bootcamps for 2010. MCITP stands for Microsoft Certified IT Professional and, in good Microsoft tradition, it doesn’t stand for one certification, but rather for a group of certifications. So we have (list taken from the Microsoft site):

  • MCITP: Enterprise Desktop Administrator 7
  • MCITP: Consumer Support Technician
  • MCITP: Enterprise Support Technician
  • MCITP: Enterprise Administrator
  • MCITP: Server Administrator
  • MCITP: Database Administrator 2008
  • MCITP: Database Developer 2008
  • MCITP: Business Intelligence Developer 2008
  • MCITP: Database Administrator
  • MCITP: Database Developer
  • MCITP: Business Intelligence Developer
  • MCITP: Enterprise Project Management with Microsoft Office Project Server 2007
  • MCITP: Enterprise Messaging Administrator

Again, as with other Microsoft exams, each certification is composed of two to three exams. There is absolutely no overlap between the exams required for the different certifications, which somewhat surprises me (I’ve briefly looked at the MS developer certifications and seem to remember more overlap there – but it is possible that this structure is simply representative of the IT side of Microsoft’s certifications).

From what I’ve seen on their site, CBT Planet focuses mainly on providing online, self-paced study options, but it seems they are trying to enter the bootcamp market. As a general rule, I don’t see the value in bootcamps. The two exceptions would be: you know it is absolutely necessary for the job you are applying for, or your employer is paying for it (in which case it can be considered an almost-but-not-quite vacation). In my experience (based on participation in a couple of certification programs – no bootcamps though), anyone somewhat familiar with the subject can pass the final exams. If you go to the bootcamp: good luck with the exam!

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

cbt planet


The issue of certifications is a contentious one in the field of IT. On one hand there are the people who have to hire without necessarily having the expertise to judge the candidate (ie. HR). On the other hand there are the people who complain about the low quality of “certified” staff.

The best attitude towards certification I know of is the following (unfortunately I can’t recall where I originally read it): recognize that there are multiple types of job roles. If you are looking for someone focused on exactly one technology (ie. managing Solaris servers), a certification can be a useful indicator. However, if you are looking for someone whose responsibilities are not so clear-cut (ie. a Java developer who should also know some Linux, Bash scripting and, if possible, Perl), you are much better off searching for people with certain personality traits than for exact certifications. Again, something I’ve read and agree with wholeheartedly: “recruiters don’t invent candidates – they find existing ones, if they exist”.

Another aspect of certificates is cost – both in money and in time. Because of this, they are often a nice reward companies can give their developers.

Having said that, let me write a little about the MCITP (Microsoft Certified IT Professional) bootcamps announced by cbt planet. From what I’ve seen, CBT Planet focuses on providing online (ie. video) training for certifications. This is an easy and time-effective way to gather the information you need to pass the exam. The bootcamp is simply a condensed version of that.

The structure of the Microsoft certifications is as confusing as ever: as with many others, there is no single MCITP; rather, there are many flavors. So, should you get it? It is a matter of personal preference. My opinion would be: if it is offered by your company, definitely. Another definite yes is when you know for sure that it is a requirement for a future job. Otherwise – not so much.

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

One more thing…


So, since I’ve started ranting about Microsoft, here is one more thing: you should never, ever use Microsoft servers if you want to scale. The reason is simple: currently the best scaling method is horizontal (ie. buy loads of cheap hardware and distribute the load among the machines). Using Microsoft server software would mean that for each server you need to buy at least one license (or more, as in the case of MS Windows + MS SQL Server). This can easily cost you almost as much as (if not more than) the hardware itself.
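The licensing arithmetic can be sketched on the back of a napkin; every price below is an invented assumption for illustration, not a real Microsoft or hardware quote:

```python
# Back-of-the-napkin cost model for horizontal scaling. All numbers are
# made-up assumptions purely to show how per-node licenses stack up.
HARDWARE_PER_NODE = 2000   # cheap commodity server (assumed)
WINDOWS_LICENSE   = 800    # Windows Server license per node (assumed)
SQL_LICENSE       = 1500   # SQL Server license per node (assumed)

def cluster_cost(nodes: int, licensed: bool) -> int:
    """Total cost of a cluster, with or without per-node license fees."""
    per_node = HARDWARE_PER_NODE + (WINDOWS_LICENSE + SQL_LICENSE if licensed else 0)
    return nodes * per_node

print(cluster_cost(100, licensed=False))  # 200000: hardware only
print(cluster_cost(100, licensed=True))   # 430000: licenses exceed the hardware bill
```

With these (assumed) numbers, the licenses alone cost more than the 100 machines they run on – and the gap grows linearly with every node you add.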

As far as I’m aware, only Microsoft itself and hosting providers (which recoup the cost from their clients) run MS servers in large numbers. This is supported by a quick check of the top ~5000 sites (as defined by Alexa) – less than 16% of them run IIS. To paraphrase Eminem:

Be smart, don’t be a retard! Would you take advice from someone who runs their front-end webserver tier on VMs, even though it makes no sense from a technology standpoint to do so?
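A survey like the one above boils down to reading each site’s HTTP `Server` response header and counting. Here is a minimal sketch; `classify_server` is an illustrative helper (not the tool actually used for the 16% figure), and the sample headers are invented:

```python
# Sketch of surveying web-server market share from "Server" headers.
# The sample list is made up; fetching a real header would look roughly like:
#   import urllib.request
#   hdr = urllib.request.urlopen("http://example.com").headers.get("Server", "")
def classify_server(server_header: str) -> str:
    """Map a raw Server header to a coarse server family."""
    h = (server_header or "").lower()
    if "microsoft-iis" in h:
        return "IIS"
    if "apache" in h:
        return "Apache"
    if "nginx" in h:
        return "nginx"
    return "other/unknown"

def iis_share(headers) -> float:
    """Fraction of sites whose Server header identifies IIS."""
    families = [classify_server(h) for h in headers]
    return families.count("IIS") / len(families)

sample = ["Microsoft-IIS/7.5", "Apache/2.2.14", "nginx", "gws", "Apache"]
print(iis_share(sample))  # 0.2 for this made-up sample
```

Caveat: `Server` headers can be spoofed or stripped, so any such survey is an approximation.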

PS. Yeah, there is the BizSpark program from MS for startups – but we have yet to see the first large-scale success emerge from it (BTW, one of my favorite sites – StackOverflow – runs Windows Server through the BizSpark program – and even they now use a Linux VM!).

Picture taken from mark sebastian's photostream with permission.

Monday, October 05, 2009

My opinion about Microsoft, software piracy and everything


This post is a response to a blogpost on tudor g’s blog about software piracy issues in Romania, and as such it might not be of interest to you, dear international reader. If this is the case, feel free to skip this post.

Disclaimer: arguments are very emotional things. As much as we would like to think that they consist of logical statements and counter-statements, with the “best” arguments winning, in real life the acceptance or rejection of a given argument very much depends on the frame of reference of the individual. With this in mind, I believe that Tudor and I have very different frames of reference (him being a Microsoft employee and me being an open-source enthusiast), and as such I’m quite sure that nothing written by me here will change his mind (and conversely, nothing written by him will change mine). Still, I think this is a useful exercise to get things off my chest and to document the arguments for more open-minded people :-)

  1. His first argument is that installing and using pirated software is harder than using legitimate software because of mechanisms like WGA – I don’t think this argument holds water, since most people (the “average user”) can’t install an OS at all, regardless of whether it is pirated or legitimate. Just ask yourself the following question: how many of the non-technical people you know installed the OS on their computer themselves? I would bet the number is very, very close to 0.

    If the OS is installed by the “neighbor kid from the 2nd floor”, then this argument doesn’t hold. Even more, many geeks pride themselves on being able to perform complicated tasks, like disabling WGA, and as such, for them the existence of protection elements is a positive thing (a challenge to be solved).

    Finally, the inverse of the argument (that legitimate MS software is easier to use than pirated software) isn’t true either in my experience. I had numerous occasions where a (completely legitimate, bought-with-the-computer OEM) Windows failed to validate, a (again, completely legit, boxed) Win 2k3 SBS suddenly refused to work because it needed to be a DC (and it told me this only after 3 months!), the Windows 7 beta deactivated itself periodically, VMs deactivated themselves after being moved from one machine to another for purely technical reasons (even though the one-machine / one-owner / one-copy rule was always observed), etc.

  2. The second argument is that there is no peer pressure to pirate in Romania (that it is not the case that “everybody is doing it”) – I would suggest he visit any campus in Romania and check out the (pirated) software which can be found on the network. And not only software, but music, movies, books, etc. Or go to repair shops and ask for MS Windows / MS Office to be preinstalled on a computer – the answer will almost always be positive. Even more, the next generation feels entitled to these freebies (and this isn’t specific to Romania either, thanks to the abundance of the freemium business model on the Internet).

    In the long term (IMHO) fewer and fewer people will be willing to pay for things which they perceive as basic needs. The only options for old-style software companies (like Microsoft) are to add more and more technical measures to try to prevent this (even though the current measures already make MS Windows annoying to use) or to raise the level of punishment associated with piracy (which shouldn’t be possible in democratic countries because of public backlash).

  3. Piracy doesn’t help the software companies by making their products more well known – if this were true, why do you think there are associations in people’s minds like Microsoft – Windows, Office – Microsoft, image editing – Photoshop, CD burning – Nero, and so on? Most people use whatever is already installed on the computer to accomplish their jobs. This is why OEMs get big bucks from software companies to preinstall their products.

    I don’t buy the “starving programmer” argument either. The cost of copying software is minuscule, which means that over 80% (a number pulled out of my rear, of course, but I’m quite sure the real number is in the ballpark) of each sale is pure profit. This means that (a) a freemium-type model can easily be sustained and (b) even a few sales mean the company makes a nice profit; the excessive focus on this part (ie. “we are losing X billion USD to piracy” – which BTW is not true for at least two reasons: first, because the method of determining the number of pirated copies is questionable, and second, because it assumes that every person who “pirates” would buy the product if s/he didn’t have access to the pirated version) is pure greed – which, let us remember, is one of the seven deadly sins.

    Also, as a programmer, you don’t have to write commodity software. Let me tell you, there is very good money to be made writing custom software for a small number of clients.

    A final point I would like to make regarding the relation of piracy to innovation: remember that all three “big” powers (USA, Russia and China) started out by rejecting patents to bootstrap their industries (and some still do). Something worth thinking about...

  4. That people buy because somehow they are convinced it is “the right thing”, not out of fear – I’m not seeing it. At least at the individual level, I don’t know anybody who bought a single Microsoft product (including myself; I’m living off my MSDN AA licenses). At the company level, the motivation (arguably) is mostly fear: they buy licenses for the same reason they pay taxes. Also (as I’ve already said at point 2), people’s willingness to pay will only go down, not up.

  5. Software is not overpriced, especially when considering the income level of Romanians – this is IMHO the best example of the “pink sunglasses” Tudor is wearing and of how his frame of reference distorts his perception. The average (net) income per month in Romania for 2008 was somewhere around 400 USD (4800 USD per year, for those of you who use that frame of reference). Given this figure, is it reasonable to expect people to give more than half of their monthly income (or even all of it, if we consider that a computer would need MS Windows + MS Office + AV) for software? May I venture a guess that (a) Tudor has at least five times the average net income and (b) he has free access to all of Microsoft’s software, and as such might not see the real situation? My challenge to Tudor would be: how much of the software on his personal machine right now did he actually pay for?
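The affordability arithmetic in the last point can be spelled out. The income figure is the one from the post; the retail prices are rough assumptions for 2009 boxed products, not quoted figures:

```python
# Back-of-the-napkin affordability check. Prices are rough assumptions.
monthly_net_income = 400       # average Romanian net income, USD (from the post)
software_stack = {
    "Windows": 200,            # assumed retail price
    "Office": 150,             # assumed retail price
    "Antivirus": 40,           # assumed retail price
}
total = sum(software_stack.values())
print(total, total / monthly_net_income)
# 390 USD total: essentially one full month of the average net income
```

Even if the assumed prices are off by a factor of two in either direction, the stack still eats a large chunk of a month’s income.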

My conclusion is that software piracy is here to stay, especially in poorer countries like ours. To give Microsoft its due: they make really excellent software (not that they don’t make mistakes, like Vista – which is abysmal; I’m speaking from first-hand experience, having played with it on two “Vista certified” laptops and in VMs). Even so, their expectations are unrealistic at a minimum, arguably even unethical. Also, as a developer, if you build on a technology (OS, libraries, components, etc.) for which you don’t have the source code, you will hit “undebuggable” issues sooner or later.

PS. Vista is the new ME – just worse:

Update: fixed some typos and errors in expression – thank you to my dear readers.

Space Invaders Rulz!



Found this in the mall. While the quality is somewhat poor and I’m not quite sure that the proper license fees were paid for the images (“raiders invasion” ???), it is still fun.

Friday, October 02, 2009

Two new challenges


Well, new for me at least...

The first one is Just go to the site and you can start directly. As far as I know, this is not time-bound.

The second one is (“break this code”). It is put up by BitDefender and I don’t know if it has a time limit. The levels I’ve seen seem to focus on C/C++. It is available in both Romanian and English.

Finally, a little off-topic, but still a challenge: The Science Knowledge Quiz – with the tagline “Are you more science-savvy than the average American?”. Via Pat’s Daily Grind (I got 11 out of 12).

Have fun!

VoIPstreet Affiliate Program review


Branded affiliate programs are a great way to offer your clients services which are not your core competency (for example, you are a web-design company but would also like to offer hosting, so that your clients have a one-stop-shop experience). This is what the VoIPstreet Affiliate Program does for VoIP: you can resell to your customers a service which is fully branded with your logo. The service provides unlimited calls within the USA and all the features you would expect from a PBX (caller ID, transfers, on-hold music, etc). Other information you might be interested in: the prices and the available phone models. One thing I couldn’t find is the financial details of the affiliate relationship.

Are they any good? Searching around the interwebs, I found some old (2007) complaints, but those seem to stem from a problem with their data center, and I couldn’t find newer complaints (which is a good sign). They also take part in industry events like IT Expo, which means they are definitely not a fly-by-night operation (one has to be careful, since the VoIP space has a lot of fraudsters who use cracked PBXs to offer cheap services).

Full disclosure: this is a paid review from ReviewMe. Under the terms of the understanding I was not obligated to skew my viewpoint in any way (ie. only post positive facts).

Thursday, October 01, 2009

SMOG button removed!



Almost a year ago I added a SMOG button to each blogpost, which (in a more or less serious manner) evaluated the “reading level” needed to understand the post. Today, however, the site providing this service started triggering a warning from Google saying that it might be malicious. I looked into it, and indeed, it contains an IFRAME pointing to a malicious site.
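The kind of check I did can be sketched as follows: scan the page’s HTML for IFRAMEs whose source points outside a set of hosts you trust. This is only a rough heuristic (a regex, not a real HTML parser), and the domains below are made-up examples:

```python
import re

# Rough heuristic for spotting injected IFRAMEs: flag any iframe whose src
# does not point at a host we trust. Domains here are invented examples.
IFRAME_SRC = re.compile(r'<iframe[^>]*src=["\']?(?P<src>[^"\'>\s]+)', re.IGNORECASE)

def suspicious_iframes(html: str, trusted_hosts=("example.com",)):
    """Return iframe src values that do not point at a trusted host."""
    hits = []
    for m in IFRAME_SRC.finditer(html):
        src = m.group("src")
        if not any(host in src for host in trusted_hosts):
            hits.append(src)
    return hits

page = '<p>hi</p><iframe src="http://evil.invalid/x" width="0" height="0"></iframe>'
print(suspicious_iframes(page))  # ['http://evil.invalid/x']
```

Zero-size or hidden IFRAMEs loading third-party content are a classic drive-by-download pattern, which is exactly what Google’s warning flags.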

So, to protect people, I’ve taken down the script until the issue is resolved. Hopefully that will happen quickly.

Picture taken from riNux's photostream with permission.