Pros and cons of cloud solutions

Whenever a new (IT) hype is getting attention, I am careful about jumping on the bandwagon. Often it is just old stuff in new clothes, and often doing things in a completely new way involves big disadvantages. What also always happens with such a hype is that at first, definitions are refined and cleared up, but later on everybody claims to offer products that match the hype (yes, when DMS (Document Management Systems) were the hype, printer manufacturers claimed to do DMS - and they were somehow right ;-) ). For a description of what "the cloud" is, look up "Cloud computing" on Wikipedia.

I am using "cloud" solutions (or let's say web applications offered by external service providers) myself. I tested and tried many while they were still free, but nowadays providers have started to charge for them. I still use several Google services (Google search, GMail, Google Calendar, Google Reader and Blogger of course), mite (not free any more for new users), Vyew (occasionally) and some social networking sites (XING, Facebook, LinkedIn). I have dumped many others for various reasons.

The pros of cloud based solutions:
  1. The application and data are available from everywhere (every machine you are using).
    Well, you can achieve the same yourself with a home network plus a fixed IP address (or DynDNS or a similar service), but:
  2. You don't need to do maintenance work (apply security updates, ensure power supply, repair broken hardware etc).
    OK, you can rent a server (or server space) somewhere, with different levels of maintenance work left to you.
  3. The applications scale well (usually those services run on infrastructure that allows scaling).
    OK, if only you are using the application (and maybe some family members and friends) you might not need any scaling, but if you are a company (maybe a startup that expects or hopes for fast growth) this might be a real advantage.
  4. Costs
    If you or your company is quite big and/or uses a lot of software, several servers might be required, and maintaining a smaller or larger server farm can be costly (just think of ensuring power supply and keeping spare parts for hardware like hard disks). Cloud service providers specialize in providing computing power, and at their scale they can do it more cheaply. I think this is the real driver behind the rise of cloud based solutions (what a finding - most things are driven by money...). Especially when you have only peak times where increased computing power is needed (e.g. some batch jobs running at night), it can be costly to have many servers up and running that do nothing most of the time.

The cons of cloud based solutions:
  1. Privacy
    You do not know what the service provider is doing with your data. At best, I would assume they only run some statistics to see where you do your business, what customers are buying from you (in the sense of which products and services sell best) and so on. Depending on the service you are using, there are usually interesting statistics to be derived that could be sold to consulting companies. If the service provider is not trustworthy, I could even imagine them selling more detailed data.
  2. Influence
    This disadvantage does not (fully) apply if you are using services like Google App Engine or EC2, because there you have more influence on what you are running (more or less your own application, or an entire virtual machine that just runs on their hardware). With an in-house solution you usually have more influence on the system configuration and the software you are using. I have instances of phpBB, Wordpress, vTiger and others running. You can apply plugins as you like and have access to the source code (to apply fixes or some really personal adaptations). You can make backups and snapshots as you like, directly manipulate the database and so on. And you can pay developers to implement your own requirements - which can be very important, because how do you expect to offer outstanding services if you just use what everybody else is using? With a service provider, your possible influence is usually very limited and you have the same functionality as everybody else using the service.
  3. Security
    This disadvantage again does not (fully) apply if you are using services like Google App Engine or EC2. But within an externally hosted application you are often sharing resources, or the whole application, with other users. Who guarantees that there is no security issue that allows one user to see private data of another? This is an additional category of security issues, apart from those regarding attacks from outside.
  4. Dependency
    You can only hope that the price of the service is kept at a reasonable level and that the provider does not go bankrupt or otherwise stop the service (even big companies close down services from time to time because they do not make enough money with them).
  5. Performance
    Why performance? Can't you get a lot of performance by just renting more computing power? - The truth is: the performance increase you can get benefits the external users of your services. Remember that with the solution in the cloud, the relevant data also needs to be there (or at least be transferred back and forth more often) - and for you the data is then far away (the internet is very slow compared with local access). And if you host just the data in-house, you may still get a performance issue (this time with the access from outside, if you don't have a fast database running on fast enough hardware).
    For example: We have outsourced our FTP server for software installation package downloads and now putting a new version online takes a lot longer than before.
    Apart from the network traffic: even my laptop, bought more than a year ago, is already a dual-core machine, and friends already have quad-core laptops. Yet when using web applications in the cloud, there is nearly nothing for that enormous computing power to do (with just the browser running to surf the internet).
    In many cases it is definitely more efficient to run the applications locally and just send the data over the (inter)net where necessary. Implementing a web application that offers - let's say - Word/Excel/Powerpoint-like features requires a lot of server power, although that computing power is already available locally. And I can already see a tendency back from web applications to full clients. But this begins to get off-topic, because this is rather a thick vs thin client comparison - we had that already. - A final word on performance: if you really need a lot of computing power, then the cloud can really help you.
Final words: For big SaaS (Software as a Service) providers, cloud based solutions are THE way to go to handle variable usage, costs and scalability. BTW: in theory, a big company could also maintain its own server farm for internal use and sell spare computing power in a cloud-like fashion. For a small startup the cloud is attractive because you can get going without investing too much in your own IT environment. Although, when you use more than just one service - Salesforce, for example - the costs of several services can add up, and they may lack integration with each other. For a lot of mid-sized companies I do not see the relevance of cloud based solutions. And last but not least: plenty of companies see their business contacts and business data as crucial for their success, so they want to keep them very private (or they might even be legally bound to keep things private) - or they simply want to stay flexible and reduce dependencies or vendor lock-in, respectively.

Last but not least: implementing a solution in a cloud-enabled way requires additional considerations and work due to the problems that come along with computing in the cloud, so developing an application enabled for the cloud is usually more expensive (although it depends on what exactly you are going to transfer to the cloud - that is not necessarily a whole application; it can also be a single task/job).

Related posts: Web vs Thick client, Your holy machine, Surveillance, privacy (NSA, PRISM, ...) and encryption.


    Administrator ethics

    Whenever the sudo command is used for the first time on a Linux box (this is the command to run things with administrator/root permissions), the following text is shown:
    We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:
    1. Respect the privacy of others.
    2. Think before you type.
    3. With great power comes great responsibility.
    Whenever I read this, I must say: I like that. And it is applicable to far more than just IT...


    Screencast recording on Ubuntu 10.04 (and 9.10)

    Creating screencasts is not only modern - often a short 2-minute video can be more helpful than a 2-page documentation.

    There are two major screencast recording tools available in Ubuntu. That said, I have always had trouble with them, because sometimes I want different behaviour and sometimes I had issues with the sound. I want to give a very short overview to make it easier for you to choose the right tool at the right moment and to get the output you want:
    • gtk-recordmydesktop
      + Very easy interface
      + Works out-of-the-box
      - The follow-mouse option keeps the mouse always in the center of the frame (I think this is disturbing when watching the video).
      - Cannot export/save as MPEG-4 or AVI or anything else - just .ogv (the open format)
    • XVidCap
      + Can produce different formats (MPEG-4 is the default, but you can switch to .ogv also)
      + A follow-mouse option is available that keeps the frame steady until you move the mouse near the frame border (this way it is easier for the viewer to follow the video instructions - just make sure that the border of the video is not too near to your primary working area, otherwise you get a similar effect as with gtk-recordmydesktop)
      - The GUI is slightly less intuitive
      - Using the package from the Ubuntu 9.10 or 10.04 repository does not bring you sound support (at least not when trying to create MPEG-4). You have to install the .deb package manually by downloading it from sourceforge (the current version is 1.1.7 - although the same version number appears in the repository, that build is a different one and does not work - strange, but true).

    I basically use both applications. When I produce local screencasts for end-users that I just save on their local machine for later reference, I use gtk-recordmydesktop and just leave the file as ogv. That worked even when Skype + TeamViewer were running and I did the recording remotely. It even recorded my sound coming through Skype! When I want to upload a video to YouTube (or vimeo and others), XVidCap is better because it immediately produces MPEG-4, which fits better for YouTube and similar services (I tried uploading an ogv file from gtk-recordmydesktop to YouTube, and it failed to replay correctly - indeed, ogv is not mentioned on the appropriate help page).

    Especially if you prefer gtk-recordmydesktop, you might need to convert the created video to the finally desired format. I have seen many ways (reading blog posts and watching screencasts), e.g. using WinFF, VLC Media Player and using Devede. Using Devede was the only one that worked for me (see http://vodpod.com/watch/3665200-converting-your-ogv-files-to-mpeg4-).
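    If you are comfortable on the command line, ffmpeg can also do the ogv-to-MPEG-4 conversion. A minimal sketch - the helper function and the file names are mine, not from any of the tools above, and it only echoes the ffmpeg command so you can inspect it before running it (ffmpeg's default settings and supported options vary between versions, so treat this as a starting point):

```shell
# Hedged sketch: build the ffmpeg command to turn recording.ogv into recording.mp4.
# The function name and file name are illustrative assumptions.
ogv_to_mp4_cmd() {
    in="$1"
    # derive the output name by swapping the .ogv extension for .mp4
    out="${in%.ogv}.mp4"
    # echo the command instead of executing it; drop the echo to actually convert
    echo ffmpeg -i "$in" "$out"
}

ogv_to_mp4_cmd recording.ogv
```

    In my case Devede was what actually worked, so consider this merely an alternative to try.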

    In any case you should have the ubuntu-restricted-extras installed (using synaptic package manager or sudo apt-get install ubuntu-restricted-extras) for doing video or audio stuff. I would also install ffmpeg.

    I also usually have to set the sound input to a high level; otherwise, when replayed on some machines (e.g. notebooks or external monitors with weak speakers), even turning the volume up to the highest level produced only low sound.

    Final note: It might be necessary to quit Skype (if you have it running) to avoid sound or video juddering.

    Related post: Ubuntu 10.04 with docking station.


    Ubuntu 10.04 with docking station

    After installing Ubuntu 10.04 I lost proper dual monitor handling for my use case. Basically, I want to use my notebook on the road as-is, and in the office I want to put it into the docking station with the lid always closed.

    Ubuntu 10.04 Lucid Lynx does not handle this properly any more. I found out that in 9.10 it was probably pure luck that it worked for me, because I had upgraded from 9.04 and had some fixed entries in xorg.conf.

    But anyway, the xorg.conf file is legacy; configuration is now handled by gdm with xrandr.

    I found out that many people have the same use case as I do and run into problems. Some solutions are specific to NVidia cards, but I have an Intel card. Thanks to Nylex in the LinuxQuestions forum plus the Ubuntu XOrg documentation, I could find the solution:

    I had to put this into the file /etc/gdm/Init/Default, before the line starting with /sbin/initctl (for why TV1 appears here, see update 3 below):
    xrandr | grep "HDMI1 connected "
    # 0 is returned on success
    if [ $? -eq 0 ]; then
        xrandr --output HDMI1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
    else
        xrandr --current | grep "VGA1 connected "
        if [ $? -eq 0 ]; then
            xrandr --output VGA1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output HDMI1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
        else
            xrandr --current | grep "TV1 connected "
            if [ $? -eq 0 ]; then
                xrandr --output LVDS1 --mode 1440x900 --rate 60 --primary --output HDMI1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
            fi
        fi
    fi

    What it does: using xrandr, it probes whether an external monitor is attached (whether HDMI or VGA) and, if so, turns off the laptop's LCD display so it is not used. You need to adapt the resolutions to fit your setup.
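    One detail worth noting: the grep patterns like "HDMI1 connected " are deliberately specific, so they do not accidentally match outputs that xrandr reports as "disconnected". A self-contained sketch of the same idea - the sample lines are made up for illustration (real ones come from xrandr --current), and the awk variant is my addition, not part of the scripts above:

```shell
# Canned sample in the style of `xrandr --current` output (made up for illustration)
sample='HDMI1 connected 1680x1050+0+0 (normal left inverted right)
LVDS1 connected (normal left inverted right)
VGA1 disconnected (normal left inverted right)'

# Matching the second field exactly avoids false hits on "disconnected",
# which a careless `grep connected` would also match.
printf '%s\n' "$sample" | awk '$2 == "connected" {print $1}'
```

    With the sample above, only HDMI1 and LVDS1 are printed; VGA1 is correctly skipped.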

    I had some cases where (maybe because of timing at startup) it did not work (sometimes it worked after a gdm service restart, but not always). So I added --preferred and --primary to the xrandr calls above, and I added the script below to System->Preferences->Startup Applications:
    xrandr --current | grep "HDMI1 connected "
    # 0 is returned on success
    if [ $? -eq 0 ]; then
        sleep 5s
        xrandr --output HDMI1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
    else
        xrandr --current | grep "VGA1 connected "
        if [ $? -eq 0 ]; then
            sleep 5s
            xrandr --output VGA1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output HDMI1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
        else
            xrandr --current | grep "TV1 connected "
            if [ $? -eq 0 ]; then
                sleep 5s
                xrandr --output LVDS1 --mode 1440x900 --rate 60 --primary --output HDMI1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
            fi
        fi
    fi

    [Update 2]:
    It still did not work 100%. Sometimes when I moved the mouse beyond the desktop edge it disappeared, and I found that the laptop monitor was still active (so I could move windows over there and they were gone). I could notice this immediately after login, when the workspace icons at the bottom right were displayed with increased width. Explicitly turning off the LVDS1 display (the laptop screen) in the script helped (see the appropriate options in the updated script above).
    [Rest of update 2 removed because of update 3]

    [Update 3]:
    It turned out that occasionally, on boot, another monitor was recognized as active; at first it was LVDS1, but occasionally it was also TV1 - and I don't know why, because I never attached a TV to the machine. Anyway, I have added options to turn these off explicitly to the scripts as well (see the updated gdm init script portion and the startup script above).

    Apart from that, occasionally a re-evaluation of available displays may be triggered by an application or event (e.g. opening and closing the laptop lid, manually calling xrandr without parameters, or starting arandr or other applications for the first time). Such a re-evaluation can also occur right after graphical login, and then your initial gdm xrandr modifications are forgotten. Therefore it helps to have the script at hand (I have it in /opt) and added to the GNOME menu under "Other", to run it on demand when necessary.

    Apparently a timing issue remained as well (I assume because the startup applications are started in no particular order and all asynchronously). However, adding a "sleep" line to the script (only to the additional startup script, not to the gdm init script!) helped. I currently use a 10-second delay (subject to change), but in your case less might be sufficient or more might be required (for me, 5 seconds was not enough to make it work on every boot).

    For you it might be necessary to switch additional displays off explicitly (as I did with TV1, for example).

    Oh, and BTW:
    In System->Preferences->Power Management, make sure the lid-closed action is "Blank Screen". I did a lot of testing with setting it to do nothing (only possible via gconftool-2), but after all, the above setting is probably the better one.
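    For reference, the gconftool-2 call I mean looks roughly like this - the key path is from memory for the GNOME 2 power manager of that era and may differ on your system, so verify it first (e.g. with gconftool-2 --all-entries /apps/gnome-power-manager/buttons):

    # assumed key path; check it on your system before setting anything
    gconftool-2 --type string --set /apps/gnome-power-manager/buttons/lid_ac "nothing"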

    If in your case the resolution(s) are not set correctly, you can also do that in this script. Use man xrandr or look at https://wiki.ubuntu.com/X/Config/Resolution#Adding%20undetected%20resolutions for more information on the appropriate xrandr commands.
    I noticed some flickering, and the login screen was displayed in a lower resolution, so I added --mode 1680x1050 (the resolution of my external monitor) to the xrandr calls in both scripts.

    Honestly, with this script I feel better than previously on Ubuntu 9.04 or 9.10, where I got it to work by accident after moving the monitors around in the configuration (although in reality they were positioned the other way round). As I found out today, I wasn't the only one with that experience (see http://ubuntuforums.org/showthread.php?t=1110407), although I am used to being the non-standard user. With my use case of keeping the lid closed while in the docking station I am by far not alone, as I noticed, and some even downgraded to 9.10 because they did not get it to work.

    Many, many thanks again to Nylex and the Ubuntu documentation team!

    Related posts: Ubuntu 10.04 with docking station part 2, Firefox change default page format, Ubuntu 10.04 Experiences, OpenOffice and LibreOffice starts slow.