2009-12-30

The Software Development Mismatch Problem

How often do you feel that school did not prepare you well for real life?

I often notice areas where school skipped the things that turned out to really matter. For my career as a software developer, though, I feel I got a reasonably good start. Later, at university, I noticed a bigger gap between what I was taught and what I found useful. That said, I must admit that I am a practically oriented person: I don't deal much with theory, although I know it can be useful at times.

Still, I have thought several times that there is a mismatch problem (see Malcolm Gladwell on the mismatch problem) in the software development field as a whole.

Recently I listened to two podcast episodes that confirmed my suspicion that in many cases software development results do not fit customer needs or economic constraints. I found them on se-radio.net: "Episode 149: Difference between Software Engineering and Computer Science with Chuck Connell" and "Episode 150: Software Craftsmanship with Bob Martin".

I agree with most of the content - for instance that computer science is quite different from software engineering - see also the article Software Engineering <> Computer Science linked from the show notes of the podcast, or further articles about "Beautiful Software".

In my opinion (also):
  • A lot of time is wasted later, when the software architecture is bad and the code is not kept clean.
  • The developer needs to choose wisely and master his primary tools - the programming language(s) of choice and the IDE.
  • The focus must be on the code, not on tool usage or on producing too many documents.
I also find it very useful where Chuck Connell draws the dividing line between software engineering and computer science (paraphrased from my understanding): when people are involved in a crucial way - compare a custom desktop or web application project with developing or analyzing an encryption algorithm - you are on the software engineering side.

My personal conclusions from all this:
  1. It is extremely important to think well and communicate well with the customer before creating the architecture of the software.
  2. Don't forget the importance of writing clean, readable and well-documented code (code comments matter more than separate documentation created alongside the implementation, since the latter tends to become outdated sooner).
  3. Experience is an important factor. An experienced developer is worth a lot!
Related posts: About agile software development, The good, the bad and the ugly.

2009-12-20

Going Linux

It has now been more than four months since I decided to switch to Linux completely (not only at home, where I have been using Linux since about 2006). And for about three months now I have been running Linux exclusively (Ubuntu in my case) at my workplace. This has been a bit more of a challenge than just using Linux at home, because at work I have considerably more requirements than "just" e-mail, web surfing, writing letters, scanning, and storing and printing my photos.

One thing is that I need more applications: mind mapping, taking screenshots (with annotations), creating videos (showing bugs or how to configure particular pieces of software), creating virtual machines (for legacy Windows development and for testing software in different environments), software development, writing concept documents, investigating files (diff viewing and hex editing), remote support, and more.

A second thing is integrating with the office environment and getting remote access to customers, some of whom use VPNs and some other applications such as TeamViewer (for which, unfortunately, no Linux version is available yet).

I started with Ubuntu 9.04 Jaunty Jackalope and upgraded to 9.10 Karmic Koala right after its release (so I have been running that for nearly two months now). My notebook is a Dell Latitude E5500.

The installation and first impressions were much better than with Windows 7, for example (where already the first use of IE and the first software updates were full of annoyances). Furthermore, a lot of important things were already in place. However, I am not a standard user, and there were several things I wanted to have different from the default installation...

Additional applications:
Many additional applications I wanted were just a few clicks away using the Synaptic package manager (this is one of the first additional installs I recommend). This is far more efficient than on Windows, where you first have to go to Google and then to the appropriate websites to hunt for the place to download even core tools (like a better zip utility). So getting up and running with the most important apps is much faster on Linux (or at least on Ubuntu in my case).
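If you prefer the terminal, the same thing is a one-liner with apt - a minimal sketch; the package names are just examples I happen to use, not a prescribed list:

    # Refresh the package index, then install a few typical extras
    # (the package names are examples - pick what you need).
    sudo apt-get update
    sudo apt-get install meld ghex p7zip-full vlc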

And last but not least, your software downloads come from trusted sources, with all packages PGP-signed automatically (something a "normal" Windows user does not even think about while installing all kinds of stuff from whatever source).

Regarding applications, the biggest issue is knowing your options and choosing an application that fits your needs. For many needs there are at least two options (often more). But there are some areas where it is difficult to find even one program that fits. For example, I searched for a long time to find a screenshot tool that suited me. I need to take a lot of screenshots with annotations for documentation or as part of user support, so a very efficient tool is essential for me. Finally I found Shutter, which fits my needs. It was a longer search because it is not in the standard repositories - to get it you need to add an additional repository from shutter-project.org. So although many applications are already in the repositories, for some more specialized applications you may need to add optional repositories. But once the repositories are added, the applications are seamlessly integrated into your package management (and hence updated along with everything else).
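For the curious, adding such a repository and installing from it looks roughly like this on Ubuntu 9.10 - the PPA name below is the one I believe shutter-project.org points to, so double-check their instructions:

    # Add the Shutter PPA (verify the exact name on shutter-project.org),
    # refresh the package index and install the tool.
    sudo add-apt-repository ppa:shutter/ppa
    sudo apt-get update
    sudo apt-get install shutter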

There remain a few tools that you might want to install and update manually, because you contribute by testing or you need the very newest versions. In my case NetBeans is such an application; it is currently being developed really fast, and since I have been on Ubuntu I have already done a second upgrade - so I am on 6.8 now, while the distribution version would "still" be 6.7.1. For NetBeans I do this because I want to decide separately when to switch to a new version.


Reliability and stability:
I can now work the whole day (8 hours and often many more) without noticing the system getting slower and slower. Even Firefox runs faster and more smoothly under Linux (I use roughly the same plugin set as under Windows). Some software I download and install does not run well, but that usually only happens when I install older, poorly maintained software that does not seem to be widely used. In such cases the software can be uninstalled without leaving clutter on the system. So in general I do not fear that my system will slowly become cluttered and broken as time passes, which makes it no big deal to try out a lot of different applications while looking for a particular feature set.

Even though I have already done a major distribution upgrade, I still have the impression of having a clean system. On Windows, on the other hand, I would never upgrade a PC in place; I would always do a clean install of the OS. That said, I have lately read that many people recommend a clean install for Linux too. I noticed that in my case there was no migration to ext4, so I am still on ext3. I will probably do a clean install next time as well. But before doing that I will test how quickly I can transfer my data, configuration and the same set of applications.
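One trick I plan to try for the application part - just a sketch, I have not timed it myself yet - is to export the package selection on the old installation and replay it on the fresh one:

    # On the old system: save the list of installed packages.
    dpkg --get-selections > my-packages.txt

    # On the fresh install: feed the list back in and let apt fetch everything.
    sudo dpkg --set-selections < my-packages.txt
    sudo apt-get dselect-upgrade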

Another big advantage is that under Linux there is no fragmentation slowly building up as time passes - an issue that slows down every Windows PC after a while and forces you to defragment the machine overnight (it is a long process) from time to time.

But the most important thing is: in general, things either work or they don't. So far I have not experienced issues of the "sometimes it works, sometimes it doesn't" kind. Either there is a bug or you are missing some library (which may not have been correctly declared as a dependency). But applications do not interfere with each other in the sense that installing one thing breaks another (as known from other so-called operating systems) - at least I have never faced such an issue.

I have had a single exception so far: recently I experienced an issue of the "sometimes it does not work" type with my mobile Internet stick, where I sometimes had to unplug it and plug it in again. That happened on Ubuntu 9.04 Jaunty Jackalope. After upgrading, it only worked when it was already plugged in while booting the machine. And this is really the only thing I can complain about regarding reliability: this was/is a problem with some Huawei modems no longer working on Ubuntu 9.10 Karmic Koala - a Linux kernel problem in general (see bugs #446146 and #449394). For me (with the E160G, which shows up in lsusb as an E220/E270 - at least in my case) the problem was solved by a firmware update of my Huawei modem, which I found strange since it had worked on Ubuntu 9.04 before. But anyway, this is a serious problem that should not happen. There are many people for whom the mobile Internet connection is the only one they have, and by breaking it they cannot even get their updates (and a probable fix for this problem) any more. That said, I have seen Windows machines where mobile Internet wasn't working either.
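If you want to check what your own stick reports itself as, lsusb is enough - 12d1 is Huawei's USB vendor ID, and the product ID shown next to it identifies the modem model the kernel actually sees:

    # List USB devices and show only the Huawei entries (vendor ID 12d1).
    lsusb | grep 12d1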

But this and some other minor issues I initially had with Karmic Koala were solved with the second wave of updates about two weeks after release. I have never seen a first service pack for Windows within two weeks of a release!


General architecture:
I consider the Linux operating system simple and straightforward. Although there are plenty of modules, services and corresponding configuration files, there are rules about what is stored where, and most configuration files (if not all) are stored in human-readable text format.

The driver system also seems much better to me on Linux (from the little I know about it) - but this might just be my personal impression.

On Linux there is less flexibility in the permission system, but it is clearer. I had so many permission issues on Windows where hunting for the missing permission was painful work. The flip side is that for some requirements it may be a little more difficult to get exactly the effect you want if you need very sophisticated permissions. For that there is the "acl" package with its commands getfacl and setfacl to manage extended permissions (if the core file permission features are not enough for you), as sketched below.
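A minimal sketch of what that looks like (the user name and path are made up for the example):

    # Install the ACL tools.
    sudo apt-get install acl

    # Give an extra user read/write access to a directory without touching
    # its owner or group ("anna" and the path are made-up examples).
    sudo setfacl -R -m u:anna:rwX /srv/projects/shared

    # Show the resulting extended permissions.
    getfacl /srv/projects/shared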

Linux was purely or mostly command line for a long time, and what that still means today is: everything can be done on the command line. As a consequence, integrating with the OS can often be done by calling system commands - no integration against system APIs is necessarily required.
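A tiny illustration of that point (the paths and host name are invented): a script can drive the whole system just by chaining standard commands, with no API bindings involved:

    #!/bin/sh
    # Illustrative only: pack a project, copy it to a test machine and
    # pop up a desktop notification - plain commands all the way through.
    set -e
    tar czf /tmp/myproject.tar.gz -C ~/work myproject
    scp /tmp/myproject.tar.gz testuser@testhost:/tmp/
    notify-send "Deployment" "myproject.tar.gz copied to testhost"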


Hardware compatibility:
This is really the biggest issue. Not all hardware is supported. I have tested Linux (various distributions) on machines where I got nothing but a black screen. If you are thinking of moving to Linux, you should take care to buy compatible hardware (and gadgets).
Before blaming Linux for that, think for a moment: Microsoft "outsources" most driver development to the vendors, and Apple has a very small set of hardware to support (what you get there is more vendor lock-in). On Linux a large part of the drivers is developed by the community, plus a smaller set of vendors that have noticed people's rising interest in Linux.


Software compatibility:
On Linux the focus is on open standards and open formats. This is a big advantage if you think about long-term archiving of your data or long-term compatibility of software components. I have files created with applications that I can no longer read with current software, or where the newer versions that still run on recent Windows releases became too expensive for me. The only catch: there are several multimedia formats with legal restrictions glued to them. To make them work on Linux, some additional packages must be installed manually on most distributions (Linux Mint, which is Ubuntu-based, ships them by default).
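On Ubuntu the usual shortcut is the restricted-extras meta-package, which pulls in the common codecs, Flash and the Microsoft core fonts in one go:

    # Install the commonly needed non-free codecs and fonts on Ubuntu.
    sudo apt-get install ubuntu-restricted-extras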


Overall conclusion:
I am very happy with my decision and cannot imagine switching back.

Using Ubuntu at work is clearly a bigger challenge than just using it at home, in particular if the company infrastructure does not pay attention to compatibility either. I was lucky because at the company where I work the network printers are all HP, and HP printers are generally well supported under Linux - for your own company you might want to have a look at linuxprinting.org.

The biggest issues I had were with the various remote desktop support tools used by our customers. I had not noticed before how many remote desktop support products have no Linux version available. For me, one of the most widely used ones - TeamViewer - did not work under Wine either, although people in forums claimed it should. I found Yuuguu (it's like Skype for screen sharing), which is free and available on different operating systems, and I now use that wherever possible.

There are a few apps left (like TeamViewer or some internally developed tools) that only run (well) on Windows. For those I have set up a VirtualBox VM running Windows (with a license I got from the company).
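Creating such a VM can even be done entirely from the terminal - a sketch with made-up names and sizes; the VirtualBox GUI offers the same steps with a few clicks:

    # Create and register a Windows XP guest (name, memory and disk size
    # are arbitrary example values).
    VBoxManage createvm --name "WinXP-work" --ostype WindowsXP --register
    VBoxManage modifyvm "WinXP-work" --memory 1024 --nic1 nat
    VBoxManage createhd --filename ~/WinXP-work.vdi --size 20000
    VBoxManage storagectl "WinXP-work" --name "IDE" --add ide
    VBoxManage storageattach "WinXP-work" --storagectl "IDE" \
      --port 0 --device 0 --type hdd --medium ~/WinXP-work.vdi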

A very important consideration: if you would like to switch to Linux as a company or as an advanced Windows user, it is a long-term project. In my case I had been keeping an eye on alternatives to Windows since, let's say, about 2005. When I noticed how quickly Linux improved from release to release (at that time looking mostly at Fedora), I started to prefer applications that are available on Windows AND on Linux whenever I searched for new applications to fit my IT needs. So when I finally switched, most of the applications I now use on Linux I had already been using on Windows for a long time (such as Firefox, Thunderbird, OpenOffice, VLC, VirtualBox, FreeMind, MySQL and so on). For a company that relies heavily on applications chained to Windows, one option is always to run a Windows terminal server and let users do their Windows work that way (a common practice for companies switching over). In general a company should favor applications that work on both platforms, because even if Windows is still the preferred platform under current conditions, that may change. So take the safe route and stay platform independent.

BTW: the title of this post was borrowed from goinglinux.com - a podcast I can highly recommend for beginners and even for more advanced users. I am a subscribed listener and enjoy every episode (only the Computer America episodes contain a lot of interruptions - even with the commercials stripped out - so I often skip them).

Related posts: Why Linux?, Why I switched to Ubuntu, Cross platform solutions, About Dell, The operating system, The Open Source idea, New year's IT resolutions, Software on speed, Ubuntu 10.04 experiences, Small Business Boom, Ubuntu compatible hardware, The Dell Latitude 2110 and Ubuntu 10.04.1, User lock down, The community, Popular Ubuntu desktop myths, Why companies do not use Linux on the desktop, Distribution choice.