There has been a long-running debate about whether multiple monitors - specifically two or more - improve programmer productivity, but there are still some things worth saying.
Of course, you might be of the opinion that programming is an expensive business and that anything which speeds it up, or even promises an outside chance of improving its efficiency, is worth spending money on. If so, you might not need to read on - but there are some interesting practical tips later on how to do it right.
The first thing to say is that it doesn't matter whether a multi-monitor setup improves efficiency, although it would be nice to be able to quote some figures to prove that it does.
What matters is that, to the programmer using the setup, it feels more efficient.
If you have never used a multi-monitor setup then don't judge until you have tried it for a few days. I guarantee that you will not give it up once you have tried it.
When you work with a multi-monitor system you feel more organized and more able to get on with the work - with the one proviso that the machine running the show isn't letting the side down. That is:
adding a multi-monitor setup to an inadequate machine has the potential to make things worse, not better.
You have to have a firm foundation to build the system on. If the programmer waits for a debug window to open for longer than it takes to forget the debug step about to be initiated, then it doesn't matter how many monitors are displaying the results - the moment is lost.
It is also very important not to confuse multi-monitor with increased display real estate. Having multiple monitors does increase screen real estate, but it's not simply the number of pixels that makes the difference - it's the compartmentalization. That is:
multiple screens are not the same thing as a single very big screen.
If you have a big screen it is indeed much better than a small screen - but there is a law of diminishing returns in operation. When you have a lot of screen real estate you can position windows so that you can see them and you can compare content and copy and paste without having to swap which windows are on top. But....
... the big problem is organization.
There are operating system gestures which can be used to place windows in fixed locations, but in the main you need to put some work into arranging things to get the best result. Overlapping windows were a great UI idea, but they aren't perfect once the stack of windows you are managing gets beyond about seven: the Magic Number Seven.
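If you want fixed placement with no per-session dragging at all, a tiling window manager can pin each category of work to a monitor. A minimal sketch, assuming the i3 window manager on Linux - the output names `DP-1` and `HDMI-1` and the window classes are placeholders for your own setup:

```
# ~/.config/i3/config (fragment)

# Pin workspace 1 (code) to the left monitor
# and workspace 2 (running app) to the right.
workspace 1 output DP-1
workspace 2 output HDMI-1

# Route windows to their workspace by window class,
# so the editor and browser always open in the right place.
assign [class="Code"] 1
assign [class="Firefox"] 2
```

Windows and macOS offer similar fixed-layout tools (snap zones, Stage Manager), but the principle is the same: the placement is decided once, not renegotiated every time a window opens.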
In practice, of course, the number of windows that you can juggle depends on all sorts of factors - how quiet your workspace is, how good your short-term memory is, how distinct the tasks are, the variation in importance of the tasks and so on... Whatever the number, it exists. On any given day there will be an optimal number of windows that you can manage on a single monitor, no matter how big it is.
If you don't get it right then things can be worse than useless. Looking for lost windows is a problem even if you do remember to tab through them or use some other shortcut.
Anything that means you have to think hard about where your windows are means you aren't concentrating on the main task.
Research sponsored by a monitor manufacturer and carried out at the University of Utah came to the conclusion that:
- People using a 24-inch screen completed tasks 52% faster than people who used an 18-inch monitor
- People who used two 20-inch monitors were 44% faster than those with a single 18-inch monitor.
- Productivity dropped off when people used a single 26-inch screen.
Notice that there is a drop in performance when you move to a 26-inch screen - the so-called big monitor paradox. Why?
My guess is that once the pixel real estate reaches such sizes the problem of organization becomes important.
In other words, once you have enough space to display the number of windows that you can mentally manage, the size of the screen becomes irrelevant at best and detrimental at worst.
If you were hoping for some hard and fast research results then you are going to be disappointed. I for one cannot find anything convincing and programmer-specific. (Please email and let me know of anything I've missed and I will add it to this article.) It is surprising that such an important topic hasn't been the subject of proper study.
So what is the key difference between big pixel displays and multiple monitors?
As I've already hinted, it's organization.
When you program you generally want to look at the code while running your application. One screen has the app and one the code. It is as simple as it gets. You drag the windows onto the monitors you decide to use for the job and it's like working with two separate machines - only with no remote debugging needed, and you can copy and paste.
The monitors become categories of task, they are like pigeon holes in which you store particular types of information. It is tempting at this point to say that if you can work well with n windows on one monitor, irrespective of its size, then you can work with 2n windows on two monitors and mn windows on m monitors but... this is a claim too far. A simple scaling law may please the algorithmic mind of a programmer, but real life isn't that simple. Psychology indicates that "chunking" i.e. aggregating small items into larger groups, can enable us to remember more but it isn't a simple linear scaling.
If you have one monitor you can work well with n windows and two monitors enable you to work with a few more but not 2n.
The very act of assigning tasks to monitors means that you can also switch your attention very easily. If you are following the progress of the app, look to monitor two. If you suddenly have a flash of inspiration about where the bug is hiding, you can switch your attention mentally to debugging and physically to the same subject by looking at monitor one.
The monitors represent the task.
It is this monitor-based context switching that you miss the most when forced to go from a multi-monitor system back to a single screen that does everything.