Lately I have often suppressed my gut feeling on technical issues. Then I finally stumbled upon a video that completely supports it: Jim Zemlin, head of the Linux Foundation (and, jokingly, Linus Torvalds' boss), talks about the lessons learned from working on Linux:
Lesson 1: Don't dream big
Lesson 2: Give it away
Lesson 3: Don't have a plan
Lesson 4: You don't always have to be nice
2013-07-23
2012-08-07
The truth about hardware support
Ever since I first started using Linux at home, I have known that one must choose hardware carefully to avoid pain when installing Linux.
When people said that Windows supports more hardware than Linux, I used to confirm it from my own experience.
But here is the twist: out of the box, Linux supports more hardware than Windows does (out of the box)! Microsoft "outsourced" most hardware support to the vendors, and when you buy new hardware with Windows preinstalled, it is the vendor who did the job of getting everything to work!
Recently I wanted to help a new co-worker reinstall Windows on his work laptop (HP Pavilion g6). There was an extra partition prepared by the vendor which probably contained the required drivers; however, it was somehow inaccessible, so we could not get the drivers from there. After a clean Windows 7 installation: no WLAN, no sound, and no ethernet either! After a long search on the net (from another machine, of course), my co-worker found the most important download (the ethernet driver) on a separate HP site for businesses (after finally also identifying the exact sub-model of the g6) - more than 100 MB of download for a freakin' ethernet card!
After that I was so frustrated at losing so much time just to get plain ethernet working (let alone WLAN and the rest) that I left the remainder up to him. Later in the evening he called me about activating Windows and Office, and I could not get to the Microsoft Action Pack site because the login somehow did not work any more.
The next day he arrived at the office with Ubuntu installed on the HP Pavilion g6 - everything worked out of the box, not a single extra driver required, and of course fully usable (without the need to activate any software)!
But this is not always the case. There are plenty of vendors that do not write drivers for Linux, and many do not even publish the specifications so that somebody else could write one. Where an open source - or at least freely redistributable - driver exists, Linux already ships it, whereas on Windows you need to get separate driver setups or CDs from the box or the vendor's site.
Software updates are currently running on a Dell Latitude E6530 next to me. As usual, all I need to tell Dell is: I need a laptop, I will not pay the Microsoft tax, I will install Ubuntu on it, and the hardware must support it. I neither want nor need to search forums for possible problems; I can rely on Dell shipping fully supported hardware - everything out of the box, no additional drivers required.
My recommendation: even if you do not yet plan to use Linux, tell your vendor when buying a new PC or laptop that you want the hardware to be both Windows AND Linux compatible. If you plan to use Windows: hope that you never need to reinstall it yourself, grabbing all the required drivers from the internet!
Related posts: Ubuntu compatible hardware, About Dell, The hardware.
2012-04-10
A few Linux related videos
Here are a few short, accessible overview videos related to Linux that might spark your interest:
- Ubuntu spot
- How Linux is built
- Linux vs Windows (in brief)
- Linus Torvalds: Why Linux is not successful on the desktop
- Ubuntu TV
- Ubuntu for Android demo
- Linux is better than Windows
- Unity technology overview
- Microsoft Office vs OpenOffice / LibreOffice
- Linux does what Windows does not
- 10 reasons Windows 8 will fail
Related posts: Why Linux?, Going Linux, The Open Source idea, User lock down, The community, Popular Ubuntu desktop myths, Why companies do not use Linux on the desktop, Distribution choice.
2012-04-05
Choosing a programming language
Currently - after a very long pause - I unexpectedly find myself facing the programming language decision again.
Changing your programming language is a big deal, and you shouldn't do it every two years. When you search the web you will find recommendations to learn many languages, a new one every two or three years. I find this totally silly. Getting really productive with a programming language takes at least a year, and of course you would like to maximize the return on that investment (ROI).
When I last evaluated programming languages, I followed a three-step approach:
- Collect all options, ending up with a handful remaining for further analysis.
- Keep an eye on the activity and evolution of the candidates from step 1.
- Analyze the remaining options in detail and choose.
My main criteria were:
- I prefer statically typed languages over dynamic ones for several reasons (e.g. they are less error prone; YMMV).
- I don't want to hand-code user interfaces. I have coded GUIs since I was eight years old and got my first GUI designers around fourteen (back in the MS-DOS days), so hand-coding a GUI feels like returning to the stone age; a missing GUI designer is a no-go for me.
- I don't like language hopping, and because the requirements of my software projects are very dynamic, I need a programming language usable in nearly all realms - so domain-specific languages are excluded for my needs.
- My applications are usually plugin/addon enabled, which means a customer must be able to develop extensions for special needs on his own, without additional costs. So my preference goes to languages that are free (and open source), including the IDE used for development.
- The language should not be tied to a particular operating system.
Now, about four years later, there is nothing wrong with my decision; it still stands. With the rise of alternative operating systems, the importance of Java has grown (on the server side, Windows has definitely already lost, at least for enterprise applications), and many server applications go Java to be platform agnostic. Apart from that, the Java world is huge. Microsoft's .net is growing too, but it is still far behind in size and quality of libraries and community.
The trigger for my latest search is that I have a few very small client-side programs to write (not "real" applications, just tiny programs for particular small needs). I found Java - and .net or Mono as well - just too big for such tiny stuff. In my particular case these are Windows-specific needs. A few of them I already solved by writing VBScripts, which was fine for the GUI-less cases. Now I have a few small needs for small GUIs, and that brought me to another brief look around.
And indeed, that is the single parameter ("should also fit very tiny requirements") I did not include in my decision back in 2008. And a good thing too, I think, because finding the one programming language that fits really everything 100% is not realistic. It is not even realistic to think that a programmer nowadays can survive knowing just one language - but keep in mind that no one can achieve the same level of expertise in every language they use.
So this post can be seen as an addendum to my main pro-Java decision: the programming languages that are helpful in addition to Java.
For Windows development you should know VBScript, and on Linux shell scripting, Python, or Perl, for the small scripting stuff.
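To illustrate the kind of GUI-less "small need" such scripts cover, here is a hypothetical Python sketch of a typical chore - reporting the largest files under a directory. The function name and defaults are my own illustration, not from any tool mentioned here.

```python
import os
import sys

def largest_files(root: str, count: int = 5):
    """Return up to `count` (size, path) pairs for the largest files under `root`."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    # Tuples sort by size first, so reverse order gives largest files first.
    return sorted(sizes, reverse=True)[:count]

if __name__ == "__main__":
    for size, path in largest_files(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"{size:>12}  {path}")
```

A VBScript doing the same would need the FileSystemObject and noticeably more code; the point is that for these one-off chores the scripting language, not a full application stack, is the right tool.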
But what to choose if a little GUI is needed?
If you are looking for a platform independent development platform, you could look at Free Pascal with the Lazarus IDE: it creates native code (so you can just take the executable and run it, instead of writing packages or setups that manage plenty of dependencies), and it is fast. There is one problem with this approach: on Windows (my case), using COM components (not to mention .net) is not well supported and only possible with quirks (I have not tried it myself, I only read about it). That is why this is not an option for me in my current situation. If your application does not need to integrate tightly into the Windows ecosystem, Free Pascal gives you multi-platform development (same code, it just needs to be compiled for/on each platform).
After all, the core technology is still C(++), and Code::Blocks is an IDE available for all major platforms (for wxWidgets projects, wxSmith seems to be the most capable GUI builder; you need to install wxWidgets separately, at least on Windows). Alternatively, for Linux development you can use NetBeans with external designers to build the GUI. I have developed quite a lot of C++, but many years ago, and today I simply had problems getting Code::Blocks to work seamlessly with wxWidgets (designing worked, but compilation failed with configuration errors). What I found on the net about my errors was from around 2008 and partly did not match my environment. I gave up on this, but I want to mention the option for those who succeed.
Last but not least, I still see the option of using SharpDevelop with .net, for the single reason of time-to-get-started and seamless integration into the Windows ecosystem - and this combination, by the way, is the only one mentioned here that is bound to Windows only. If you are thinking of Mono and MonoDevelop, be warned about the differences! Creating platform independent applications with C# is not as seamless as you might think. Using MonoDevelop on Windows (MonoDevelop can compile against .net or Mono) brings more platform independence, but you lose the Windows integration (COM/ActiveX support at a minimum level - I find the Java COM interop even better - plus registry access and the like). The very important point here is the Windows integration - it is the one core argument for this option!
I was about to write a paragraph on speed, but did not want to do so without a single test after more than three years of not checking. Surprisingly, a minimal GUI test led to the following result: cold start on a virtual Windows 2008 R2 machine is 5 seconds for both .net and Java; a second start is 1-2 seconds, again for both. I then tried a Java test application with a little more GUI and found that its (warm) start is 4 seconds - not bad either. Java 1.6 update 30 and the .net 4.0 runtime seem to deliver a similar user experience, at least regarding startup behaviour. Many still say Java is slow - far from it!
But I must not forget that I need to access ActiveX/COM components for my small tasks, which makes .net more feasible because it simply integrates better here, as already mentioned. Of course there are options in Java - for example my favorite, com4j (which I tried with several COM components in the past, where it worked well) - although I never tried to embed ActiveX controls into a Swing component, and that does not seem trivial in Java (see here).
Needless to say, I would prefer Java for 100% of the work if it were easier to deal with COM components and if it integrated nicely with the Windows stuff. Java with NetBeans is basically the only combination I really love to develop with. Everything else lacks IDE features, is difficult to set up, or has a small community and a tiny amount of available components.
And of course there is my general tendency to avoid Microsoft technologies wherever I encounter them. Classic Visual Basic was one of Microsoft's longest-maintained products, even though there were significant changes between VB 3.0 and VB 4.0 (with the switch to 32 bit). Looking at recent years, there were barely usable first attempts at .net GUIs, first with Windows Forms and then WPF (see a discussion here), and Microsoft pushed a lot of new GUI styles over the years - Ribbon interfaces, and now the Metro GUI, where you need to learn a new GUI language and software companies continuously need to adapt or rewrite parts of their applications. Had I used Java since the late nineties, I would have experienced a completely different continuity. Microsoft managed very well to drive developers to .net without them noticing that they are again caught in a one-way street with a dead end. Just because industry follows Microsoft in most areas - at least on the client side - I have to accept that I cannot stay completely outside the Windows-specific (VBScript and .net) stuff. I will take care to keep it to a minimum. This means that for my tiny programs I will most probably go with .net, simply for lack of other options.
For those who like dynamic languages, I want to mention Python. One of its core advantages, in my opinion, is that it runs on many platforms but comes with Windows extensions on Windows. This means you can keep using Python when you need to do Windows stuff - of course, using Windows facilities (COM and the Windows API, for example) means that at least that part of your program is then bound to Windows only. For platform agnostic programming there are wxWidgets bindings for Python. However, the IDEs I tried were all poor in features or stability - I tried Eric, SPE, and IDLE back in 2008 - and a short look tells me there is still a lack of GUI designers (e.g. Glade for Windows seems close to discontinued), so I cannot really recommend a particular IDE; have a look yourself at a list of Python IDEs. Unfortunately, deployment of Python programs on Windows is not as easy as for .net or Java.
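As a small sketch of that platform split - standard library only, so it runs everywhere, with the Windows branch honoring the usual APPDATA convention (the function and application names are hypothetical):

```python
import os
import platform

def config_dir(app: str) -> str:
    """Return a per-user configuration directory for `app`.

    On Windows this uses the APPDATA environment variable, as a native
    Windows program would; elsewhere it falls back to ~/.config.
    """
    if platform.system() == "Windows":
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    else:
        base = os.path.join(os.path.expanduser("~"), ".config")
    return os.path.join(base, app)

print(config_dir("mytinytool"))
```

The same pattern extends to the heavier Windows extensions: keep the platform-specific calls behind one small function, and only that function is bound to Windows.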
To round up this post: for building setups for your Windows applications I can recommend Inno Setup as a good mix of flexibility and ease of use. For creating Linux packages, see the official documentation on creating .deb packages (Debian, Ubuntu, Mint, ...) and .rpm packages (Red Hat, Fedora, ...).
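For a feel of what an Inno Setup script looks like, here is a minimal sketch - the application name, version, and file name are placeholders, not from a real project:

```ini
; Minimal Inno Setup script (.iss); AppName, AppVersion and the
; executable are placeholders for illustration only.
[Setup]
AppName=MyTinyTool
AppVersion=1.0
DefaultDirName={pf}\MyTinyTool

[Files]
Source: "MyTinyTool.exe"; DestDir: "{app}"

[Icons]
Name: "{group}\MyTinyTool"; Filename: "{app}\MyTinyTool.exe"
```

Compiling this with the Inno Setup compiler yields a single self-contained installer executable, which is exactly the low-ceremony deployment these tiny programs call for.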
Related posts: The programming language, The IDE and the libraries, Install NetBeans on Ubuntu 10.04, Java vs .net/C#.
2012-03-12
Efficient desktop environment
I consider myself a power user. Every day (and sometimes also nights ;-) ) I make intensive use of computers to get things done. I am not a farmer - I work in the IT business - but intensive use of computers is by far not limited to IT people.
Of course, for people who spend a major part of their time in front of a computer, lousy software has a bigger impact on efficiency. While many people can live with reinstalling their Windows PC every six months, I get angry when one of my most-used features takes two clicks more than it could.
There are two major kinds of computer users: those who use one or two applications most of the time, and those who use a larger set of applications. To the first group belong people who answer the question "Which operating system are you using?" with something like "Word - Microsoft Word". ;-) While for the first group the underlying OS is of minor relevance, for the latter group, who juggle a bunch of applications, the operating system can be a critical factor.
I am an Ubuntu user, and in the Ubuntu world the last months were full of discussions about the new desktop environment developed by Canonical, namely Unity. Many complained about a bad user experience. I myself played around with a lot of additional components to pimp my desktop. I tested several docks such as Cairo-Dock, AWN, and others. Although in the end I did not use any of them (for reasons of stability, performance, or simply lack of time to tweak them to fit my needs perfectly), I had a quite nice configuration with Gnome 2 and AWN on my Ubuntu 10.04 machine. Surprisingly, that configuration looked quite similar to Unity.
Many of my peers switched to Linux Mint, which is Ubuntu-based but follows a different strategy regarding desktop look and feel. If you like Ubuntu but don't like Unity, you can either use "classic" Gnome 3 or install Cairo-Dock, which (at least since 12.04 beta 1) offers a Classic Gnome + Cairo-Dock option right at login; I even managed to create an AWN session with the help of TuxGarage. (The example there is outdated - take your current ubuntu.session file as a template, or look at the comments on that post.)
However, after testing a while, I found that Unity fits me best - at least with the least effort needed to make it efficient. Important from my point of view is that everything can be reached quickly with mouse OR keyboard: virtual desktops, launchers, open application windows, menus, the file system.
Out of the box, Unity offers a lot of useful hotkeys:
- ALT+TAB switches between open applications; ALT+^ switches between open windows/instances of the same application, and after ALT+TAB you can collapse and expand application windows with the UP/DOWN keys.
- Pressing and releasing ALT opens the HUD menu (F10 still brings you to the normal menu).
- CTRL+ALT+LEFT/RIGHT switches between virtual desktops.
The only thing I immediately missed was a quick way to switch desktops with the mouse. I used to configure this by installing compizconfig-settings-manager and setting up the desktop wall to switch to the next or previous desktop on a right-click at the left or right edge of the screen. That no longer works with Unity active on the left, so I changed it to use the left and right mouse buttons on the bottom edge, which works. In addition, I reduced the icon size to 32 pixels (this can be done with the compizconfig Unity plugin, by installing MyUnity, or via the command line).
People who don't like Unity have different reasons, but one might be the dock style (which even Windows later adopted). A dock combining launcher and window list has one big advantage: the icons are always in the same position, no matter in which order you launch the applications. This is essential if you open a lot of applications during the day and would otherwise end up continuously hunting for your windows. Although I used to hate window grouping, Unity behaves differently depending on whether you click on a different application's launcher or not, which I found reduces the necessary clicks.
These and a few other things I missed in all the other desktop environments - such as configuring several clocks, not just one (I like to see New York and other time zones when clicking on the clock). Although other docks have many more customization options and features, I find Unity simpler, and I hit minor bugs in Cairo-Dock and AWN, which led to my decision not to bother assembling my own fully customized X session and probably running into more trouble. I would have tried longer had I found Unity unacceptable, but after all my tests I still find Unity the best.
I definitely think the time of the classic taskbar (as known from Windows XP, KDE, XFCE, or LXDE) is over - mostly because of the unpredictable icon positions and the fact that the first thought always has to be "Did I already start this or not?", with a different icon to click depending on the answer. And even on larger screens it is annoying to waste space on additional panels (quick-launch and window list). My attempts to combine XFCE or LXDE with Cairo-Dock or AWN failed because I either got crashes or had too many panels left over. BTW: XFCE ships with a bottom launcher, but it is set to auto-hide by default.
There are still a few things I would like to see in Unity - such as easier configuration of the Unity launchers, or a classic Gnome menu launcher included by default - but I think Unity is on a good path. I got familiar with it quite fast, and I think new users will too. Of course many people find plenty of things to tweak after a first installation of Ubuntu; I will probably come up with my own set of tweaks after the final Ubuntu 12.04 LTS is out...
Related posts: Popular Ubuntu desktop myths, Why companies do not use Linux on the desktop, Ubuntu 12.04 LTS Precise Pangolin optimized.
Of course, for people who spend a major part of their time in front of a computer, lousy software has a bigger impact on efficiency. While many people can live with reinstalling their Windows PC every six months, I get angry when one of my most-used features takes two clicks more than it needs to.
There are two major kinds of computer users: those who use one or two applications most of the time, and those who use a larger set of applications. The first group includes people who will respond to the question "Which operating system are you using?" with something like "Word - Microsoft Word". ;-) While for the first group the underlying OS is of minor relevance, for the latter group, which uses a whole bunch of applications, the operating system can be a critical factor.
I am an Ubuntu user, and in the Ubuntu world the last months were full of discussions about the new desktop environment developed by Canonical, namely Unity. Many complained about a bad user experience. I myself played around with a lot of additional components to spice up my desktop. I tested several dock components like Cairo-Dock, AWN and others. Although in the end I did not use any of them (for reasons of stability, performance, or simply no time to tweak them to fit my needs perfectly), I had a quite nice configuration with GNOME 2 and AWN on my Ubuntu 10.04 machine. Surprisingly, that configuration looked quite similar to Unity.
Many of my peers switched to Linux Mint, which is Ubuntu-based but follows a different strategy regarding desktop look and feel. If you like Ubuntu but don't like Unity, you can use "classic" GNOME 3, or install Cairo-Dock, which (at least since 12.04 beta 1) offers a "Classic GNOME + Cairo-Dock" session right at login. I even managed to create an AWN session with the help of TuxGarage. (The example there is outdated - take your current ubuntu.session file as a template, or see the comments on that post.)
However, after testing for a while, I found that Unity fits me best - or at least requires the least effort to make efficient. What matters from my point of view is that everything can be reached quickly with mouse OR keyboard; that includes: virtual desktops, launchers, open application windows, menus, the file system.
Out of the box, Unity offers a lot of useful hotkeys: besides ALT+TAB for switching between open applications, you can use ALT+^ to switch between open windows/instances of the same application, and after ALT+TAB you can collapse and expand application windows with the UP/DOWN keys. Tapping and releasing ALT opens the HUD (F10 still brings you to the normal menu). CTRL+ALT+LEFT/RIGHT switches between virtual desktops. The only thing I immediately missed was a quick way to switch desktops with the mouse. I used to install compizconfig-settings-manager and configure the desktop wall so that a right-click on the left or right edge of the screen switched to the next or previous desktop. That no longer works while Unity occupies the left edge, so I changed the setup to use the left and right mouse buttons on the bottom edge instead, and that works. In addition, I reduced the icon size to 32 (this can be done with the compizconfig Unity plugin, by installing MyUnity, or on the command line).
People who don't like Unity have various reasons, but one might be the dock style (which even Windows adopted later). A dock combining launcher and window list has one big advantage: the icons are always in the same position - no matter in which order you launch the applications. This is essential if you open a lot of applications during the day and would otherwise end up continuously searching for your application windows. Although I used to hate window grouping, Unity behaves differently depending on whether you click the launcher of a different application or the current one - which I found reduces the necessary clicks.
These and a few other things I missed in all the other desktop environments - for example, configuring multiple times, not just one (I like to see New York or other time zones when clicking on the clock). Although other dock components have a lot more customization options and features, I find Unity simpler, and the minor bugs I found in Cairo-Dock and AWN led to my decision not to bother building my own fully customized X session and then probably running into more trouble. I would have tried longer had I found Unity unacceptable. But: after all my tests, I still find Unity the best.
I definitely think the time of the classic taskbar (as known from Windows XP, KDE, XFCE or LXDE) is over - mostly because of the unpredictable icon positions in a classic taskbar, and because the first thought always has to be: "Did I already start this or not?" - depending on the answer, a different icon has to be clicked. And even on larger screens it is annoying to waste screen space on additional panels (quick launch and window list). My attempts to get a combination of XFCE or LXDE with Cairo-Dock or AWN working well together failed because I either had crashes or too many panels left over. By the way: XFCE comes with a bottom launcher; it is simply set to auto-hide by default.
There are still a few things I would like to see in Unity - like easier configuration of the Unity launchers, or a classic GNOME menu launcher included by default - but I think Unity is on a good path. I got familiar with it quite fast, and I think new users will too. Of course many people find many things to tweak after a first installation of Ubuntu. I will probably come up with my own set of tweaks once the final Ubuntu 12.04 LTS is out...
Related posts: Popular Ubuntu desktop myths, Why companies do not use Linux on the desktop, Ubuntu 12.04 LTS Precise Pangolin optimized.
2011-10-27
The individual desktop
I can see an interesting movement in the market: many Apple iPhone users seem to lean towards a Mac when they think of buying a new computer - and some actually do.
And in my neighbourhood I can already count 6 Macs within the first minute of trying.
I think it is a good thing that the number of Macs is increasing. With more people using Macs, companies need to start considering that there is not only Windows and that the world is colorful.
There is just one thing I need to make clear: many Mac users think they are special individuals because of their extraordinary computer, and IMHO this is not true.
I can see the following different strategies:
- Microsoft: Keep compatibility to keep market share, but offer new GUI features that feed people's enthusiasm, and keep some flexibility.
- Apple: Focus on usability and don't make the user think or choose.
- Linux: Be open and flexible. Everybody should be able to use it as desired.
I think the Apple way is not the worst, but for those who work a lot with the computer, it may be worth investing a little more time to find the environment that fits best. If you want to be really individual, Linux is the way to go - just search YouTube, e.g. for "my Linux desktop" or "top linux distros", to get an idea of what people do with Linux.
Related posts: Distribution choice, Popular Ubuntu desktop myths.
2011-10-03
Smart Backup on Linux
About two years ago, after setting up a Linux server, I was asked to create a backup covering the last seven days. The reasoning was that if something gets deleted or corrupted, it might not be noticed immediately.
The problem was: the hard disk did not have enough free space to hold seven copies of the data, and at that moment no external hard disk was available.
I searched the Internet and found a really smart solution - you can find details here on mikerubel.org (rsync snapshots). My final script looks like this:
#!/bin/bash
# Drop the oldest snapshot and shift the remaining ones back by one day
rm -rf /data/autobackup/backup.7
mv /data/autobackup/backup.6 /data/autobackup/backup.7
mv /data/autobackup/backup.5 /data/autobackup/backup.6
mv /data/autobackup/backup.4 /data/autobackup/backup.5
mv /data/autobackup/backup.3 /data/autobackup/backup.4
mv /data/autobackup/backup.2 /data/autobackup/backup.3
mv /data/autobackup/backup.1 /data/autobackup/backup.2
# Clone yesterday's snapshot as hardlinks - unchanged files cost no extra space
cp -alv /data/autobackup/backup.0 /data/autobackup/backup.1 > /data/autobackup/0to1.log
# Sync the live data into backup.0; rsync replaces changed files with new
# copies, which breaks their hardlinks and leaves the old versions intact
rsync -av --delete /data/live/ /data/autobackup/backup.0/ > /data/autobackup/last-rsync.log
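The seven explicit mv lines can also be written as a loop. Here is a minimal sketch of the same rotation with the paths as parameters (the function name and the parameterization are my own additions, not part of the original script):

```shell
#!/bin/bash
# Rotate hardlink snapshots (rsync-snapshot style, as described on mikerubel.org):
# drop the oldest, shift the rest, hardlink-copy the newest, re-sync live data.
rotate_snapshots() {
    local root=$1 live=$2
    rm -rf "$root/backup.7"
    for i in 6 5 4 3 2 1; do
        [ -d "$root/backup.$i" ] && mv "$root/backup.$i" "$root/backup.$((i + 1))"
    done
    # cp -al clones the tree as hardlinks: no data blocks are duplicated
    [ -d "$root/backup.0" ] && cp -al "$root/backup.0" "$root/backup.1"
    # rsync replaces changed files (breaking their hardlinks), so older
    # snapshots keep the old contents while identical files share storage
    rsync -a --delete "$live/" "$root/backup.0/"
}
```

Called as rotate_snapshots /data/autobackup /data/live from cron once a day, this behaves like the script above; the directory checks just keep the first runs quiet while not all snapshot directories exist yet.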
If you want to go more into detail - Michael Jakl has posted a variant here (rsync time machine).
Currently, in my use case, the live part takes 149 GB and the whole seven-day backup takes 161 GB (instead of more than a terabyte!).
I know disk space is quite cheap, but the rising amount of data also requires a lot of it! In my case:
- I already have several external drives (2 HDs where disk images of my work notebook are saved alternately).
- Two separate external drives where only the data is saved with rsync (to one disk more often, to the other about once every two months).
- I have a 500 GB external HD holding data that I do not need that often or that simply consumes too much of my primary HD. That one was not backed up in the early days. In the meantime, I sync it to a second drive from time to time (people have a lot of external HDs these days, but I am not sure whether they back those up too...).
2011-09-04
Distribution choice
Lately I have been testing a lot of different Linux distributions for the desktop.
There were some discussions about Unity (the new default desktop in Ubuntu) and GNOME 3 (the new GNOME default desktop), and that made me think and test other distributions.
Maybe I should point out the major differences between the various flavors of Linux. These are the core points where they differ:
- Base distribution it is derived from. Many distributions are based on others; only a few do everything from scratch in their own way. Some are based on Debian or Red Hat, for example. Somewhat relevant here is also the company that stands behind the distribution as the main contributor (if there is one).
- Package management system - some use apt (Debian-based), some yum (Red Hat-based), for example.
- Default desktop environment and window managers used (can be GNOME, KDE, XFCE, LXDE, Fluxbox, etc.).
- Default packaged applications (the set of applications installed by default with the distribution).
- Core objective (be it use as a server, on the desktop, on routers, on network storage systems, etc.).
- Hardware support. Although all Linux distributions share the same core (the kernel), different distributions are based on different kernel versions, apply different patches, and some add drivers for which no source code is available (while others strictly include only open-source drivers). Because of these differences, not all distributions support the same hardware. This last point is very probably the most important one.
- Oh, and there are plenty of navigation bars that can be used - for example AWN, Cairo-Dock, Docky and many, many more. These are GUI elements for application launchers, taskbar management and the like. Here you have to choose - if you are not satisfied with what your favorite distribution ships - because mixing them is not a good idea...
- Comparison of X Window System desktop environments
- List of major Linux distributions
- List of software package management systems
- Ubuntu compatible hardware
- Dock applications for Linux (Going Linux Listener Feedback)
For those who are now overwhelmed with options and cannot decide, remember that you can install all desktop environments together on one machine and decide at each login which one to use! You can, for example, use GNOME but install and run applications written for KDE while logged in to the GNOME desktop. You can run a mix of GNOME, KDE, etc. applications in whatever desktop environment you are currently using. So there is no exclusive OR between desktop environments or particular applications. The only thing that is not interchangeable (at least not easily) is the package management system - which is usually of secondary importance for the normal user.
Regarding hardware choice there are two options:
- Inform yourself about which hardware is supported by your favorite Linux distribution and buy that - or
- Go to any shop of your choice and just tell them that you want a machine that is compatible with <the Linux distribution of your choice> - and that otherwise you will bring it right back to the counter.
For choosing a distribution, my advice is:
- Look at screenshots of different distributions. The ones you find horrible to look at are probably the ones you would like less. That said, often it is enough to switch the theme to get a much friendlier desktop.
- Watch demo videos of different desktop environments on YouTube (or other channels).
- Try them. Most Linux distributions offer live CDs for download. That means: you download a CD image, burn it to CD, and boot the computer from that CD, which lets you start Linux without touching your current installation - everything runs from the CD. That is of course slower than running from the hard disk, but it doesn't change anything on your machine.
- Search the repositories (software center, or whatever it is called in your distribution) for applications you might want to try. Don't search for applications by typing "Microsoft Word" - no - try "word processor"; instead of "Excel" or "Photoshop", try "spreadsheet" and "photo editor". The idea is to use search keywords that describe what you want to do. The reason: a Linux version of the photo editor you used on Windows might not exist, but there may be plenty of other applications doing the same job on Linux.
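On Debian-based distributions, the same task-oriented search also works on the command line. A small sketch (assuming apt-cache is available; on yum-based distributions, yum search does the same job):

```shell
# Search package names and descriptions by task, not by Windows product name
if command -v apt-cache >/dev/null 2>&1; then
    apt-cache search "word processor"
    apt-cache search spreadsheet
    apt-cache search "photo editor"
fi
```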
Here are the distributions I tested since 2005 (skipping all those I only took a very short look at):
- Fedora (with which I started in about 2005)
- Ubuntu (my current primary OS in the office and at home in version 10.04 with latest updates). I also tested other flavors like Kubuntu (Ubuntu with KDE), Xubuntu (Ubuntu with XFCE), Lubuntu (Ubuntu with LXDE).
- Mint (including different flavors)
- Puppy Linux
- Debian (Stable and Testing)
- Zorin OS
Although I am pretty convinced of the stability of Debian on a server, on the desktop even the testing version contains outdated program versions. (In my last tests, the second website I visited complained about an outdated browser ;-) ).
These are currently my favorite distributions:
- Ubuntu (of course, as it is my primary OS, based on Debian)
- Mint (based on Ubuntu)
- Zorin OS (also based on Ubuntu) - it implements a very Windows-7-like desktop environment quite nicely. Those who like the Windows 7 taskbar will like Zorin OS. Of course, it brings a cleaner menu and a package manager - things you don't get on Windows 7 ;-) .
Remember that depending on your type of job and the needs it brings - and depending on personal taste - you might find that a different distribution or desktop environment fits you best!
Related posts: Popular Ubuntu myths, Why I switched to Ubuntu, Going Linux, The individual desktop.
2011-07-30
Implementing effective computer security
I am really surprised how safe people feel in their daily computer work.
Wherever I hear people talking about viruses and computer security, they are quite convinced that a virus scanner is sufficient for secure computing. I can say that I have removed a lot of viruses from a lot of PCs. Whenever I found an infected PC and ran several antivirus tools in parallel, the scanners disagreed about how many and which viruses were found. If you ask real experts, you will get the answer that
- there are differences in quality of virus scanner software and that
- no virus scanner finds all viruses - and last but not least
- they cannot search for all known viruses all the time (because this would simply take too long - so during normal scans they usually search only for the currently most common viruses).
Apart from that, viruses are not the only threat. You may surf to websites that have been infected and now run spying and malicious code in addition to the original website code. A co-worker of mine caught a virus while visiting his online banking site, which had been hijacked by a hacker. (Of course, it was a Windows virus...)
In addition to viruses, which require little or no action by the user, hackers and spammers try to convince users (by email, for example) to take more action, like sending money or even adding malicious code to their own web pages or browsers (e.g. https://www.facebook.com/topic.php?uid=31987371885&topic=14985).
I am the first to get angry when I see too much security. The computer is an important and powerful tool. Whenever I need to work on a machine with limited permissions, I easily get angry if something does not work just because it is disabled. But: everyone should implement a little security!
On Windows the easiest is to:
- install a virus scanner like AVG, Avast, BitDefender or others.
- use Firefox or Chrome instead of Internet Explorer to browse the internet.
- get the add-ons Adblock Plus and NoScript. The latter can be quite annoying because many websites are not displayed well by default. While not very popular with most end users, I find it very effective relative to the additional work it requires. It is a good tool to avoid the effects of cross-site scripting.
- use an e-mail client that allows text-only display of messages. Again, this may not display your emails very nicely, but it shows you the real link (in HTML mails the displayed link can differ from the one opened when you click it) and keeps you free of a lot of typical e-mail viruses. Outlook is not the right tool here (no version of it). One option is Thunderbird (which I personally love because of its many options and long list of available plugins).
Related posts: Why companies do not use Linux on the desktop, Going Linux.
Why companies do not use Linux on the desktop
There is a very long-running discussion on LinkedIn with the headline "Why aren't more corporations using Linux as a desktop OS?", and after a while I had the feeling that the same reasons and arguments kept returning again and again.
So I went over all the comments again and tried to categorize the opinions (trying to filter out those who had already replied earlier with the same or similar arguments).
Here is the result, with the 15 most-mentioned reasons for not using Linux on the desktop, taken from the opinions of the discussion participants (the red ones I consider to be real issues that need solving; my comments in italics):
- Slick Microsoft PR, Windows is the de facto standard, or simply an inherited monopoly. (18 votes)
This is a fact, but not a reason for using Windows. If it were, we would still ride horses and have no cars - just because horses were a monopoly for traveling (related to 11)...
- Missing appropriate software on Linux (for particular needs) / vendor lock-in. (18 votes) - Mentioned in detail were: branch-specific, specialized apps that are not platform-independent and/or not well integrated, AD, deployment, AS400, Sage, meeting and conference software, Photoshop, Exchange integration, smartphone integration, AutoCAD, screen readers, ...
Yes, this really is an issue - and it follows from the fact mentioned above in 1, and hence from 6. The result is that many developers and software companies still focus only on Windows.
- Compatibility issues between Linux and Windows software when collaborating. (13 votes) - Mentioned in detail were: MS Office vs OpenOffice/LibreOffice, some proprietary formats can't be read, domain integration, some websites are IE-only, general compatibility fears.
The MS Office vs OpenOffice/LibreOffice issue is a really big one, because many people write many documents and need to collaborate. There are many compatibility issues - even between different versions of Microsoft Office. The 2007 and 2010 docx, xlsx and pptx formats (yes, we already have two variants of the *x formats) introduce a lot of possible conversion/open/save problems. I could write a long blog post just about those issues. Domain integration I do not consider important, as I find the whole domain system outdated given current company structures (increasing cooperation between separate companies, for example) and how they evolve. IE-only websites are still being developed today - although it makes no sense at all.
- Too much tech know-how needed (just for nerds, servers and/or command-line junkies). (11 votes)
In reality, if you want good work to be done, you also need much in-depth know-how on Windows. It's just that nearly everyone who has spent hours gaming in front of the computer already considers himself/herself a computer guru...
What simply is not true is that you need to be a command-line junkie or a nerd to use Linux. Those days are long gone - Linux has a graphical environment!
- People already know Windows and are simply resistant to change (and will struggle). (10 votes)
Most people I have met who showed resistance when I talked about Linux weren't that interested because they don't use computers much - only when necessary, to write an email or surf some website. They can be considered resistant to computers in general. Although even they could have a better computer experience with Linux, it's usually best to just let them carry on until the next virus has bitten their OS to death.
- Lack of awareness / Ignorance or simply decision of management. (9 votes)
Yes, this is an issue. People simply don't open their eyes. This point is somehow related to 5. I found that most people currently in management grew up with Windows. That is simply their comfort zone, and usually they are so busy and so convinced of themselves that they simply don't consider anything different. This will change over the years, as more people grow up with Macs or Linux machines. I do trust that, even without active marketing, people will notice the advantages, plus their current suffering, and slowly move over.
- (Migration) costs (get it to work, experience, train users etc). (9 votes)
Related to 10. Of course, when considering a big change (and changing the OS in a company is a big change anyway), investment is needed. I too invested a lot of time in learning Linux, which is really tough only at the beginning, until you understand some core things. And for the admins there is a lot more to learn than for the users. In my case, I was so annoyed by Windows and the continuous suffering that - when I started - I was sure it would be worth the investment, and it was! And honestly: switching from XP to Windows 7, or from Office 2003 to 2010, also eats a lot of money and requires additional training for the users.
- Missing Linux Knowhow (and not enough time to dig into it). (7 votes)
Related to 7. Of course, in the beginning the know-how is missing. But seriously: switching from Windows Server 2003 to Server 2008 R2 also required additional know-how - even worse, you think you know it, but then you overlook some relevant changes (like the syswow64 registry hive and the separate 32-bit executables in that folder) in your first attempts. You have to struggle with the new versions because Microsoft pushes you by ending support for the older OS versions. In reality you don't even have time to dig into the new Windows details, do you? - New details, new problems...
- Hardware compatibility issues. (7 votes)
Yes, you need to take care (which the normal user does not - or cannot - when buying a new PC). I had problems first of all with particular WLAN cards, Bluetooth adapters, and sound and video cards. The issue can be widely reduced by buying officially certified hardware (certified by Canonical or by the hardware vendor, for example). Dell and Lenovo, for example, are vendors known to be very Linux-compatible (you still need to check the particular model, or just ask the vendor or partner of your choice). I usually recommend: when buying, tell them that you want a Linux-compatible model - otherwise you will return it.
- Less educated specialists/administrators available or cost more. (7 votes)
Related to 7. I am pretty sure that a really good Windows administrator will also charge more. There tend to be more people working in Windows environments, but I think the total number of real specialists is not significantly higher than in the Linux world. But I understand that this can be an issue for a small or medium-sized company just looking for the cheap administrator around the corner.
- People don't want to run risks and do what the others do (using the market leader). (6 votes)
Related to 1. Of course, if you do something new and fail, people might argue and ask why the heck you didn't follow what the "experts" say. When you do something new, of course you make mistakes - but you will learn and gain know-how. And you can do what fits you. By doing what the majority of others do (or recommend), you will never get what really fits your company well.
- Better support (because you pay for the software and have a contract). (6 votes)
This is completely wrong! You can also get paid support in the open-source world, and a commercial company never guarantees continued maintenance. In fact, I once invested in a software technology, and the vendor then discontinued the product (without selling it to somebody else - it was just left to die). The history of OpenOffice and LibreOffice shows us that open source is even the better path here. Oracle planning unacceptable changes? - The project was forked almost in an instant! As long as some folks are interested in it, it can continue to live - even if it's just you. It's your choice.
- Windows already there on the shipped PC. (6 votes)
I have never kept the default installation on a PC - not even when I was still using Windows. In most cases it already started with a partitioning that did not fit my desires or needs. Next could be the OS language or the preinstalled software. And a Linux installation can even be done by a novice - it's easy (at least with the Ubuntu, Mint or Fedora distributions, among others). The only really annoying thing with Windows preinstalled is that you have already paid the license fee to Microsoft, and I am pretty sure Microsoft is not sad if you overwrite your OS with Linux: you don't consume bandwidth, don't call support, don't ask questions in forums, etc. - you pay without ever asking for a service...
- Too much confusion because of the many distros and desktop environments. (6 votes)
Are you confused and don't know what to use? - No problem: I recommend Ubuntu with its default desktop - and choose the LTS version. If you have no idea what you might want, Ubuntu surely won't be a bad choice. But the point is: you have the choice! People work in different ways, have different priorities and different jobs. You might discover later that you prefer a different distribution. You may consult distrowatch.com. At this point I still find the Ubuntu distribution the most stable one (regarding the complete set of applications that exists around it), offering everything I need in terms of features and additional packages.
- Windows (and apps) looks/works better, is easier to use and/or has better/more features. (6 votes)
Simply not true. Regarding the look: There are so many nice themes to choose from that you will surely find one you like. And of course you can pick a totally different desktop or window manager - there are plenty around (Wikipedia gives an overview). Just combine the desktop of your choice with the theme of your choice - watch what people show on YouTube! You can even make your Ubuntu look like Windows XP or Windows 7 - watch this video. - Regarding the features: A standard Windows installation is totally barefoot compared to a standard Ubuntu installation. Not even an office suite is installed by default (unless you bought the machine with MS Office included). People who find Linux poor in features are usually thinking of particular Windows software not available or not running on Linux - but that is more a matter of point 2.
Related posts: Popular Ubuntu desktop myths, Implementing effective computer security, Going Linux, Efficient desktop environment.
2011-05-03
IT fallout and Buddhism
Today I had two thoughts about typical present-day IT landscapes:
- Windows is like radioactive fallout on the IT level. People accept it, somehow knowing about the problems, and then get killed by the effects (through stress and burnout, for example) - slowly and without even noticing.
- I should take Buddha as an example in order to accept the common IT flaws. Although he has seen the light, he freely chooses to return to this world to help people in their endless suffering. I somehow also feel I have seen the light, because I see how things could be done better, yet I still face Windows again and again at customers. I need to see this as an opportunity to assist people in their suffering. In most cases people are not able to end their suffering and see the light... ;-)
2011-03-02
Popular Ubuntu desktop myths
Some opinions about Ubuntu on the desktop - or Linux in general - seem never to die, although they are really outdated by now. A few are:
- Linux/Ubuntu is only for PC freaks.
This is simply not true. Ubuntu/Linux is at least as easy to use as Windows. For the normal user it is less painful for sure: You are up and running faster (because many apps are already there after installation), you don't need to search for that licence code you probably threw away accidentally a year ago, you don't get plenty of annoying questions when first starting your browser, you can easily install whatever language a family member or friend sharing the computer might need, you can buy Ubuntu preinstalled on a USB stick that you can carry around and use on whatever computer you come across, you can easily use any file as a template, and many, many more.
- Installation of applications is difficult (everything needs to be compiled).
There are distributions that compile everything from source, but even there you don't need to do it on your own. Most distributions don't require compiling anything yourself (although it is always an option, even if mostly not recommended). Installation of software is usually simpler than on Windows: You start a program (menu entry) that shows a list of available applications, you can search or scroll through the list, tick the checkbox in front of the application(s) you want and click "Apply". Download and installation then happen automatically. For uninstalling - guess what: Just untick and apply again. If you feel like it, you can also search the internet for a more up-to-date package, or for an application not in the repositories, and install it by double-clicking the download just as on Windows. What is so difficult about that? Even a trained monkey can do it. ;-)
- Everything must be done on the command line.
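For terminal fans, the same tick-and-Apply cycle maps onto apt-get on Debian/Ubuntu. A minimal sketch - the package name "vlc" is only an example, and the little apt_cmd helper is my own illustration (it only prints the command it would run, so nothing gets installed while you experiment):

```shell
#!/bin/sh
# The GUI flow (search, tick, Apply) corresponds to:
#
#   apt-cache search "media player"   # search the repositories by keyword
#   sudo apt-get install vlc          # download + install, deps resolved
#   sudo apt-get remove vlc           # untick, i.e. uninstall again
#
# Illustrative helper that only prints the command line it would execute:
apt_cmd() {
    action="$1"; shift
    echo "sudo apt-get $action $*"
}

apt_cmd install vlc   # prints: sudo apt-get install vlc
apt_cmd remove vlc    # prints: sudo apt-get remove vlc
```

The point stands either way: one tool resolves search, download, dependencies and removal, whether you click or type.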
Most things nowadays can also be done through a graphical user interface. One reason why so much is done on the command line: In a forum or blog post it is far easier to answer a question with a few commands that just need to be pasted into a terminal window than to explain plenty of clicks and menu items (with ribbons the situation is even worse). For the person receiving the help it is easier too. And unlike on Windows, everything you can do in the graphical environment you can also do on the command line. That does not mean you have to do it that way.
- Linux looks old-fashioned on the desktop.
Look for yourself and decide: http://www.ubuntu.com/desktop/why-use-ubuntu, http://www.kde.org/screenshots/, http://www.xfce.org/about/screenshots, http://lxde.org/image/tid/1, http://www.linuxmint.com/screenshots.php, http://www.gnome3.org/, http://www.enlightenment.org - all of this can be used in Ubuntu: you can choose!
Oh, you want a Mac-style dock at the bottom - or a Windows-7-like one? Look here for example: http://www.omgubuntu.co.uk/2010/04/you-choose-the-best-dock-for-ubuntu-poll/ - again: You choose what you find efficient and cool!
What is true is that some older applications look ugly because they use old toolkits (sometimes when searching the repository and trying new apps I happen to get such an old-style thingy), but the major part looks very nice.
- There are so many variants (of Ubuntu itself, and even more if you look at other Linux distributions) that there will never be a standard.
The Linux core is the same for all of them, and most differences are not in the system but in the desktop environment or GUI presentation. But that is part of your freedom, and you can anyway run applications using different GUI frameworks side by side. Keep in mind that people like to work differently; everybody has his own preferred way of doing things. So should your computer: It should support you in doing your work the way you like!
- Ubuntu is not compatible with Windows.
This can't be said so generally. It is applications, file formats or protocols that are or are not compatible, and you have to look at each separately. For example: Many widely used file formats (such as JPG, PNG, TIFF, PDF, RTF, MP3, AVI and many more) can be handled without problems on Ubuntu. Even the DOC, XLS and PPT formats can be handled quite well in the meantime. It is Microsoft that continuously invents new formats and changes existing ones (see the latest DOCX, which differs between MS Office 2007 and 2010, so that you need to save in a compatible form from within 2010 if you want the same rendering in 2007 - .docx is not .docx!). Ubuntu supports open standards better than Windows does (look at MIME, for example - the format used to send e-mails over the internet - and then look at what you get from Outlook)!
- Ubuntu is developed only by students in their spare time.
Canonical is the company behind Ubuntu, Ubuntu is based on Debian (look at http://www.debian.org/partners/ in particular), and all of that is Linux. And many big companies are contributing to Linux! So if this ever was true, it has not been for quite a while now (see http://www.techrepublic.com/article/linux-standard-gains-big-name-backers/5365462 for example).
- Ubuntu is free and hence must be very buggy.
Wherever software is developed, bugs are introduced. There are not more or fewer bugs in Ubuntu than in Windows - IMHO. Windows is used by more people and companies, so there is a higher probability that bugs get found; but with Ubuntu it is extremely easy to get involved, report bugs and get in touch with the developers, and hence get bugs fixed. So I think we are pretty much at the same level, although of course I think Ubuntu is more stable. ;-) And it is free because of the Open Source idea, where people pay for work being done (new features, bugfixes, ...) and not for ... (yes, for what?). See "The Open Source idea" for details.
Related posts: Document file format, The community, Going Linux, The Open Source idea, Why companies do not use Linux on the desktop, Distribution choice, Outlook Calendar Meetings, The individual desktop, Efficient desktop environment, A few Linux related videos.
Ignorance of the different
I haven't written for a while. The reason: I changed jobs and it has been a stressful time for me.
Fortunately I found a company that accepts my working on Ubuntu on the desktop. Of course there is Windows stuff to do - which I will handle from a virtual machine or via remote desktop.
During my job search and on other occasions I was struck by the big ignorance (it is not always plain unawareness) of people working in the Windows and Microsoft environment. The vast majority of users, IT staff and managers still cannot accept that the era of the Microsoft-only IT environment will be history in a few years.
Many young people are using Macs, for example (even if, in a few cases, just because they want to be different or cooler than others). I have customers whose users run Macs in the office on their productive workstations - yes, today (and I am not talking about a single case).
And BTW: I don't know a single real person who really tried Mac or Linux and then switched back to Windows. The contrary is true: Most people I know didn't look back after two weeks on Ubuntu, for example. I am not talking about forum posts by people who claim they switched back to Windows - I don't give much weight to them after having seen so many forum posts that are so evidently fakes (or paid comments - yes, there are companies you can pay to manipulate public opinion through forums, blog comments etc.).
Oh, and I have met people selling Android phones who, when I mentioned Linux, had never heard of it - hey, it runs on the very things you sell!
There are a few things that are really super-annoying, and I start to get angry when:
- people assume that everything other than Windows must be crap (without even looking). How can somebody hold such an opinion about something they have never even tried?
- I report a problem with a website or a service and people ask me what OS I am using (or worse: "What Windows version are you using?") and then automatically tell me it must be a problem with my machine or client - even when it has nothing to do with the OS I am using.
- I ask people for help (e.g. just tell me the proxy I have to use to access the internet), they see my machine, and they immediately say: hell, what is this - I cannot support this. Hey, maybe it's just a DNS or proxy IP address I want to know - is that really so difficult? I'm not asking them to configure my machine for me - I can do that myself.
- people treat me like a fanatic or fundamentalist just because I have pretty good reasons not to like Windows and to prefer something else. I do accept that other people do not have enough problems with Windows to seriously consider a switch (if you take care, you can keep even a Windows system quite stable and performant over a longer period of time - so with good people maintaining Windows in a company, work can be quite painless there too).
- especially Windows developers think they know it all, while in reality they have never looked at what is used outside their small world. The vast majority of big sites uses neither Windows nor MS SQL Server nor .net. Some of the Windows(-only) developers I know seem to be really brainwashed.
In reality it would be sufficient to simply be accepted, but looking more deeply into it: I have been working to improve customers' IT experience since I was about fourteen, and from the bottom of my heart I want to serve the customer's best interest. From today's perspective it simply does not make sense to write new applications that run only on Windows if there is a chance to write them in a platform-agnostic way. If the application is also available for other platforms, vendor and customer stay independent of future trends. Even very Windows-centric companies nowadays allow iPhones to attach to their networks, for example.
So, honestly, I must recommend Linux, Java or any other open technology and open standard over any proprietary one. TIFF and PDF, for example, were not adopted by accident. Documented, open standards win in the long run - SMTP, IMAP, SSL, MIME and plenty of others are well established.
Related posts: IT investment, The Open Source movement, Document file format, Data file format.
2010-12-16
Ubuntu 10.04 & docking station part 2
As you might already know, I use my Ubuntu laptop with a docking station and an external monitor. When I am in the office I dock it while leaving the lid closed and use only the external monitor (by the way, a behaviour I see at several customers too: no fixed machine any more, just the closed notebook docked in).
I had an issue I already described earlier - see part 1: Ubuntu 10.04 with docking station.
Since then I repeatedly had TV1 detected by accident even when doing nothing, or the same with my internal laptop screen (LVDS1) while working with the external monitor only. Three things are annoying when this happens:
- I can accidentally move the mouse off the visible screen, and it takes a while to bring it back.
- In compiz I have configured right-clicking on the border of the screen to switch to the next desktop. That no longer works, as the border is not reached when I move the mouse to the edge of the visible screen.
- Windows may be displayed on the invisible part of the screen, so they appear in the task bar but not on screen.
Now I am pretty sure the screensaver is the culprit here. With the screensaver disabled the issue never came back. The only drawback: When I leave the machine I have to lock the screen manually by pressing CTRL+ALT+L.
The same thing - a second monitor accidentally recognized - happened in about 75 % of cases when starting TeamViewer (my favourite remote assistance tool). Today I finally filed a bug, and guess what - problem already solved (thanks to Daniel Stiefelmaier at TeamViewer support)!
Yeah, that did it! Here is the reply from support:
Dear Martin, I found a possible solution to that. Please open a console and run
WINEDEBUG=xrandr teamviewer&
Then take a look at the log:
cat ~/.teamviewer/6/winelog
You should see some xrandr lines. Then run
/opt/teamviewer/teamviewer/6/bin/.regedit
Create the key HKEY_CURRENT_USER\Software\Wine\X11 Driver
there, create a String "UseXRandR" and set its value to "N".
Close the registry, and repeat the other commands:
WINEDEBUG=xrandr teamviewer&
cat ~/.teamviewer/6/winelog
Now, there should be no xrandr lines.
Related posts: Ubuntu 10.04 with docking station (part 1), Ubuntu 10.04 experiences, OpenOffice and LibreOffice starts slow.
2010-11-14
The future of Java
The last months (after Oracle completed the merger with Sun) were full of insecurity and discussion within the Java developer community about the future of Java. Many have blogged about their fears. I personally followed the discussions but felt the best thing was to wait. People are quick with interpretations and guesses about how others will behave.
In parallel there were discussions about Apple stopping support for Java on the Mac etc etc.
Even though Oracle had already said it is strongly committed to Java, NetBeans and other famous Sun products, the discussions went on.
Finally, a few further commitments have been made public:
IBM joins OpenJDK
http://blogs.sun.com/mr/entry/ibm_to_join_openjdk
Oracle and Apple Announce OpenJDK Project for Mac OS X
http://www.apple.com/pr/library/2010/11/12openjdk.html
This means that big players have decided to go the Java and Open Source path. And it is important to unite forces: Working together is far better than fighting each other.
When I decided to move away from Windows-only development, one major reason was: Many other operating systems are gaining market share - several flavours of Linux (Ubuntu, Mint, Red Hat, Debian etc.) and Apple - and I can't say for sure which ones will be there in the long run, but I want to give my customers the security that they can benefit from the software for a long time. Java was a good choice in the past, and now we can again be sure it still is. The idea of Java - "write once, run anywhere" - has saved me a lot of work (no compiling for different architectures, no building of differently styled setup procedures etc.). Some people note that "write once, run anywhere" is not quite true, because Java is not available on really all operating systems and exceptions often have to be made for differences between them. Well, this is partly true when it comes to particular features that might not be available on all operating systems; fortunately, for such cases Java can be coupled with C(++). As for availability: Java is available on more operating systems than most other languages (see http://www.java.com/en/download/manual.jsp).
BTW: C(++) is also a language that has been around and will be around for a long time, and I hear nobody complaining that it develops too slowly, for example. I really prefer thinking carefully before putting new features into a language - many things must be considered, and it is important to do no harm to the language.
Oh, there is finally also an official podcast from Oracle for Java developers:
http://blogs.sun.com/javaspotlight/
Of course this is biased and the Javaposse (http://javaposse.com/) is still the first address when it comes to Java podcasts.
There are now concrete plans for the next versions of Java:
http://blogs.oracle.com/javaone/2010/09/plan_b_wins.html
http://openjdk.java.net/projects/jdk7/features/
And we get exactly what we need next for NetBeans (the first-class Java IDE):
http://netbeans.dzone.com/nb-generate-simpler-rest
Related posts: The dawn after sunset, The programming language, Popular Java myths, Java applications on the desktop, The community.
2010-11-04
Self-Healing Linux
Until yesterday I had only occasionally read about it, but yesterday I experienced my Ubuntu workstation healing itself from a freeze.
I did not pay attention to how many applications were running, and as I was doing a lot of things in parallel that all take quite a while (downloads in the background, file copy operations and installations inside virtual machines), I completely forgot how many virtual machines were already started. I was about to start another one when everything got really slow. I have 4 GB of memory and a small swap partition (because during normal work I hardly ever fill up the memory of my Ubuntu).
This time memory really filled up completely and everything got veeery slow. I wasn't even able to switch windows. I could switch to the text console, which was also very slow.
I wanted to log in and kill a few tasks when messages were broadcast to my text console saying that memory was exhausted. Shortly afterwards, messages reported that the system had identified Firefox and Thunderbird as idle and closed them (maybe because they were just consuming memory at that moment). And finally it reported that the last VirtualBox machine was consuming a lot of system resources and - was killed.
Flup, flup, everything was fine again - and I hadn't even had time to do anything manually. It did exactly what I would have done myself: kill the last-started virtual machine and close the applications that were idle (the downloads had already finished).
That is simply awesome!
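What I saw was presumably the kernel's out-of-memory (OOM) killer at work. A small sketch of what you can peek at through the standard Linux procfs - the mem_kb helper is my own illustration, not a system tool:

```shell
#!/bin/sh
# When memory is exhausted, the kernel kills the process with the highest
# OOM "badness" score. procfs exposes both the memory state and the scores.

mem_kb() {  # extract one field (e.g. MemTotal) from meminfo-style input, in kB
    awk -v key="$1:" '$1 == key { print $2 }'
}

if [ -r /proc/meminfo ]; then
    total=$(mem_kb MemTotal < /proc/meminfo)
    free=$(mem_kb MemFree  < /proc/meminfo)
    echo "free: $free kB of $total kB"
    # Every process carries an OOM score; the highest scorer dies first.
    # A big VirtualBox VM would typically score highest - as in my case.
    echo "this shell's own score: $(cat /proc/self/oom_score)"
fi
```

So the "self-healing" was the kernel doing triage in score order, which happened to match what I would have done by hand.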
2010-10-25
User lock down
In many big companies it is quite normal to lock down most desktop features that would allow the user to adjust settings or install software. Basically, the user is strongly restricted.
I can understand why this is done: to reduce security risks and to prevent users from accidentally changing something they don't know how to change back.
So the overall lock-down is a way to reduce required end-user support.
The other side of the coin is:
- Every person is different and hence works in a different way, so the same configuration can't be the optimum for all users.
- Depending on the position, different additional tools might boost different users' productivity.
- For most users it is simply demotivating to be locked down.
- Users will find workarounds for many lock-downs (e.g. using http://portableapps.com/ on a memory stick).
Because of these disadvantages I want to point out alternative options:
- Lock down only where and when necessary.
Some users - especially beginners - even prefer a lock-down. Those are the users who are somewhat afraid of messing the computer up completely. Other users know in detail which operations to avoid and how to protect themselves from viruses as well as possible. You could introduce a point system, or simply lock down a user's PC only when a certain amount of support time per month is exceeded.
- Set a time limit before pushing default configuration.
In some companies, when a user calls IT support there is a maximum of 15 or 30 minutes to solve the problem - otherwise the user gets a fresh image (with the default configuration). That way your staff does not lose hours on end-user problems.
- Train the users.
Train them in security and other basics (how to distinguish executable files from data files, where never to confirm approvals in popups, not to run executable attachments from e-mails, how to properly install and remove software, etc.). A user who is aware of the risks is cautious and prudent.
- Migrate to Linux on the desktop.
Linux is more secure and less prone to breaking down after a series of software installations, uninstallations or upgrades. Backing up local settings is also easier. And in my experience Linux users require less support - even if they are no geeks! Everybody I talk to who provides both Windows and Linux support tells me the same.
Related posts: Your holy machine, Why Linux?, Going Linux, The community.
2010-09-10
Shell scripting your desktop windows
In Java my limits are often what is offered by every OS - and what is accessible from inside the JVM.
One thing I often needed in the past under Windows was to get the title (or other information) of the active window. I used the Windows API to get the information. Under Linux/Ubuntu, I thought, things would be different - and more complicated due to the different desktop environments. And how would I access those APIs from Java?
Well, it turned out that I was completely wrong and had once again been trapped in my old habits from Windows.
Linux offers command-line tools not only to get window information but also to manipulate windows, move them to a different desktop and so on. Since I can start an application on a different machine and display its window on my machine, I can even see the machine name in the window lists. A command-line program can easily be run from within Java and its output retrieved just as easily. Awesome!
Once again it turns out that Microsoft Windows is quite poor in command-line tools. However, I could implement a similar small exe that fetches the active window title through the Windows API and writes it to the console, where it can be picked up by a Java application or used in batch files.
If you are a Linux user, have a look at the following commands:
- wmctrl
- xprop
- xwininfo
With xprop -root you get exhaustive information about windows.
Using xprop -root | grep _NET_ACTIVE_WINDOW\(WINDOW\) you get the handle of the active window. You can then look up that address with wmctrl -l | grep theaddress. The only quirk is that wmctrl may pad the address with an additional 0 after the 0x, so you should search only for the part after 0x in the wmctrl output.
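The padding quirk can be scripted around by comparing only the hex digits after the 0x. A minimal sketch - the two sample lines below stand in for live xprop/wmctrl output and are purely illustrative:

```shell
#!/bin/sh
# Illustrative sample lines standing in for live tool output;
# in a real session use: xprop -root _NET_ACTIVE_WINDOW  and  wmctrl -l
xprop_line='_NET_ACTIVE_WINDOW(WINDOW): window id # 0x3c00007'
wmctrl_line='0x03c00007  0 myhost Terminal'

active=${xprop_line##* }   # last field -> 0x3c00007
hex=${active#0x}           # strip the 0x prefix -> 3c00007
# match regardless of the extra zero padding wmctrl may add after 0x
echo "$wmctrl_line" | grep -q "0x0*$hex" && echo "match"
```

The `0x0*` in the grep pattern is what absorbs the optional extra zero, so the same lookup works whether or not wmctrl pads the id.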
2010-08-02
Ubuntu 10.04 with docking station
After installing Ubuntu 10.04 I lost proper dual-monitor handling for my use case: on the road I use my notebook as is, and in the office I put it into the docking station with the lid always closed.
Ubuntu 10.04 Lucid Lynx no longer handles this properly. I found out that even in 9.10 it probably only worked for me by pure luck, because I had upgraded from 9.04 and had some fixed entries in xorg.conf.
But anyway, the xorg.conf file is legacy; the current way is gdm with xrandr.
I found out that many people have the same use case and the same problems. Some solutions are particular to NVidia cards, but I have an Intel chip. Thanks to Nylex in the Linuxquestions forum and the Ubuntu XOrg documentation, I could put together a solution:
I had to put the following into the file /etc/gdm/Init/Default before the line starting with /sbin/initctl (for why TV1 is handled, see update 3 below):
xrandr | grep "HDMI1 connected "
# 0 is returned on success
if [ $? -eq 0 ]; then
    xrandr --output HDMI1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
else
    xrandr --current | grep "VGA1 connected "
    if [ $? -eq 0 ]; then
        xrandr --output VGA1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output HDMI1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
    else
        xrandr --current | grep "TV1 connected "
        if [ $? -eq 0 ]; then
            xrandr --output LVDS1 --mode 1440x900 --rate 60 --primary --output HDMI1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
        fi
    fi
fi
What it does: using xrandr, it probes whether an external monitor is attached (HDMI or VGA) and, if so, turns off the laptop's LCD display so it is not used. If the phantom TV1 output is reported instead (see update 3), it falls back to the laptop screen. You need to adapt the resolutions to fit your hardware.
[Update]
In some cases (maybe because of timing at startup) it did not work - sometimes it worked after a gdm service restart, but not always. I added --preferred and --primary to the xrandr calls above, and I added the script below to System->Preferences->Startup Applications:
#!/bin/sh
xrandr --current | grep "HDMI1 connected "
# 0 is returned on success
if [ $? -eq 0 ]; then
    sleep 5s
    xrandr --output HDMI1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
else
    xrandr --current | grep "VGA1 connected "
    if [ $? -eq 0 ]; then
        sleep 5s
        xrandr --output VGA1 --mode 1680x1050 --rate 60 --primary --output LVDS1 --off --output TV1 --off --output HDMI1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
    else
        xrandr --current | grep "TV1 connected "
        if [ $? -eq 0 ]; then
            sleep 5s
            xrandr --output LVDS1 --mode 1440x900 --rate 60 --primary --output HDMI1 --off --output TV1 --off --output VGA1 --off --output HDMI2 --off --output DP1 --off --output DP2 --off --output DP3 --off
        fi
    fi
fi
[Update 2]:
It still did not work 100%. Sometimes, when I moved the mouse beyond the edge of the desktop, it disappeared - the laptop monitor was still active, so I could move windows over there and they were gone. I could spot this immediately after login when the workspace icons at the bottom right were displayed with increased width. Explicitly turning off the LVDS1 display (the laptop screen) in the script helped (see the appropriate options in the updated script above).
[Rest of update 2 removed because of update 3]
[Update 3]:
It turned out that on boot it occasionally recognized another monitor as active; at first it was LVDS1, but sometimes it was also TV1 - and I don't know why, because I never attached a TV to it. Anyway, I added options to turn those outputs off explicitly to both scripts (see the updated gdm init portion and startup script above).
Apart from that, a re-evaluation of the available displays can occasionally be triggered by an application (e.g. opening and closing the laptop lid, calling xrandr without parameters, or starting arandr or other such applications for the first time). Such a re-evaluation can also occur right after graphical login, and then your initial gdm xrandr modifications are forgotten. It therefore helps to keep the script at hand (I have it in /opt) and add it to the GNOME menu under "Other" to run on demand when necessary.
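For reference, such a menu entry is just a small .desktop file in your local applications directory. A sketch - the script path /opt/fix-displays.sh and the entry name are made-up examples, adapt them to your setup (an entry without a Categories line typically lands under "Other" in the GNOME menu):

```shell
#!/bin/sh
# Create a GNOME menu entry for the display-fixing script.
# Assumption: the script lives at /opt/fix-displays.sh (illustrative path).
mkdir -p "$HOME/.local/share/applications"
cat > "$HOME/.local/share/applications/fix-displays.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Fix displays
Comment=Re-apply the xrandr monitor setup
Exec=/opt/fix-displays.sh
Terminal=false
EOF
```

After writing the file, the entry should appear in the menu without a logout (the menu re-reads this directory).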
Apparently a timing issue also remained (I assume because the startup applications are started in no particular order and all asynchronously). However, adding a "sleep" line to the script - only to the additional startup script, not to the gdm init! - helped. I currently use a 10-second delay (subject to change); in your case less might be sufficient or more might be required (for me, 5 seconds was not enough to make it work on every boot).
It might be necessary for you to switch additional displays off explicitly (as I did with TV1, for example).
Oh, and BTW:
In System->Preferences->Power Management, make sure the lid-closed action is "Blank Screen". I ran a lot of tests with it set to nothing (only possible via gconftool-2), but after all, the above setting is probably the better one.
If in your case the resolution(s) are not set correctly, you can also fix that in this script. Use man xrandr or see https://wiki.ubuntu.com/X/Config/Resolution#Adding%20undetected%20resolutions for more information on the appropriate xrandr commands.
I noticed some flickering and the login screen being displayed in a lower resolution, so I added --mode 1680x1050 (the resolution of my external monitor) to the xrandr calls in both scripts.
Honestly, with this script I feel better off than on Ubuntu 9.04 or 9.10, where I only got it to work by accident after moving the monitors around in the configuration (although in reality they were positioned the other way round). As I found out today, I wasn't the only one with that experience (see http://ubuntuforums.org/showthread.php?t=1110407), although I am used to being the non-standard user. With my use case of keeping the lid closed all the time while in the docking station, I am far from alone, as I noticed - and some people even downgraded to 9.10 because they could not get it to work.
Many, many thanks again to Nylex and the Ubuntu documentation team!
Related links:
- http://www.linuxquestions.org/questions/slackware-14/automating-xorg-randr-turning-laptop-screen-off-if-external-monitor-is-connected-779386/
- https://wiki.ubuntu.com/X/Config/Resolution
- http://www.yolinux.com/TUTORIALS/GNOME.html#INITIALIZATION
- http://intellinuxgraphics.org/dualhead.html