The War At Microsoft - Managed v Unmanaged
Written by Mike James   
Friday, 06 April 2012

Microsoft's language universe is moving toward C++ and away from C#. What is the justification for this? Speed and efficiency obviously, but is this enough? The quiet war going on between managed and unmanaged code is complicated.

When .NET was introduced the key ideas were sophistication and the creation of safe code. The move to so-called "managed languages" had to be seen as part of the general trend towards abstraction. Slowly but surely, programming languages have been moving away from being a reflection of the hardware that runs them towards something that is both machine-independent and higher level.

.NET was supposed to be the future in more ways than one. At the time it was introduced, C++ and any other messy non-managed language were immediately demoted to second-class status. Although you could write unmanaged code within a managed environment, it was clearly not something you would be proud of, and such code had to be marked with the "unsafe" keyword within the program. If a little bit of code that strayed outside the managed environment was "unsafe", what do you think an entire language like C++ was - extremely unsafe, insanely unsafe, really, really dangerous?

You get the idea.

So why was/is managed code so "safe"?

The reason is that the language and the compiler are designed to keep you away from the machine. You can't gain direct access to any of the machine's resources, such as memory. You declare a variable, the compiler and runtime allocate the memory, and they make sure that you don't use any memory that hasn't been allocated to you.

What this means in practice is that the generated code is checked for things like array bounds and the use of uninitialized variables. Of course, in managed code you can't work with raw memory addresses, and raw pointers are not allowed. Instead of pointers you use references, which are abstract, managed pointers that "point" at things without a machine address ever coming into the conversation.
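To make this concrete, here is a minimal C# sketch - my own illustration, not code from any Microsoft material - showing bounds checking, references, and the unsafe block you need before raw pointers are allowed:

```csharp
using System;

class SafetyDemo
{
    static void Main()
    {
        // Managed code: the runtime allocates the array and checks every index.
        int[] values = new int[4];
        try
        {
            values[10] = 1;               // out of bounds - the CLR throws instead of corrupting memory
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("Bounds check caught the bad index.");
        }

        // A reference "points" at an object without ever exposing a machine address.
        int[] same = values;
        same[0] = 42;
        Console.WriteLine(values[0]);     // 42 - both references see the same object

        // Raw pointers are only legal inside an unsafe block, and the assembly
        // has to be compiled with the /unsafe switch.
        unsafe
        {
            fixed (int* p = values)       // pin the array so the garbage collector can't move it
            {
                *(p + 1) = 99;            // pointer arithmetic - no bounds check here
            }
        }
        Console.WriteLine(values[1]);     // 99
    }
}
```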

What it is easy to forget now is how much resistance there was to managed code. Not only did C/C++ programmers worry about the inefficiencies introduced by managed code, but so did VB6 programmers. It seemed that, apart from a few enlightened programmers, no-one actually wanted the change to something that looked a lot like Java reinvented for Windows.

However, over time, the enlightened programmers who believed that managed code really was safer than unmanaged started to win the rest over. The fact that VB6 was discontinued and C++ became a second-class language in Visual Studio also helped - programmers with nowhere else to go eventually flocked to .NET.

Even though managed code seemed to have won the day, or at least to be winning it, there was always a group of committed C/C++ programmers who thought that a language closer to the metal was worth the risk of "unsafe" coding. Unfortunately for the managed code crew, this was, and is, a powerful group, including the systems and application programmers building Windows and the rest of the infrastructure.

Now with Windows 8, and WinRT in particular, we have the resurrection of C++. It is now not only permitted to code native-style, it is actually being promoted as "cool". WinRT is a real throwback to primitive days. It might be argued that the new COM is easier to use than the old, but it is still COM, which is something .NET programmers thought was long dead. It may have had the stake of managed code driven through its heart, but the C++ coders found a way to revive it.

So what is the logic in all of this?

There isn't too much logic; it seems to be motivated more by political maneuverings inside Microsoft. However, a front of logic has to be presented to the outside world, and in this case there is only one obvious reason for choosing unmanaged over managed: speed.

If you are presented with two choices, a managed language and an unmanaged language, there is no reason not to pick the managed language, which is obviously safer and works at a higher level of abstraction than the unmanaged one. The only logical reason to choose the unmanaged language is that it is faster and more efficient in machine terms. Mind you, following this logic means you would probably have to pick assembler over C++, but there are programmers who do hold this point of view.

So, to make the case for the change, it is vital that unmanaged code is shown to be better not only now but also in the future. That is, there needs to be no doubt that managed code can never catch up with unmanaged.

This is exactly what is going on at the moment. Herb Sutter, a well-known C++ expert on Microsoft's team, has a blog post called

"When will better JITs save managed code?"

The answer, if you read the blog, is of course never.

The reason given is a lot of technical stuff about the way JIT compilers can't really do optimization (this is a great simplification; if you want the whole picture, read the blog). This is, of course, nonsense because, being software, it can generally be arranged to work however we like, as long as we are prepared to accept a loss in some other area as a tradeoff.

However, the next part of the argument is that managed code can't be as efficient because managed languages have so many sophisticated, and expensive, features that make them safe. To quote:

"First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency."

This is the "keep it primitive and it will go faster" argument.

While this is of value in situations where there is no choice but to consider speed and efficiency paramount, say in embedded systems and in systems programming, it goes against the flow of programming history. As time ticks on, programming languages become less and less connected with the hardware and more and more connected with the programmer. If they also become less efficient, then all we have to do is throw some more hardware at the problem.

Some have now panicked at the prospect of Moore's Law running out and of processor clock speeds failing to increase, something that has been made worse by the boom in the mobile market, where processors are much less powerful. However, processing power is still increasing.

On mobile devices there is everything to play for, and CPUs will become more powerful, if only to gain companies such as Intel a marketing edge over ARM. On the desktop, changes in programming languages that make asynchronous and true parallel programming easier provide throughput gains without having to return to lower-level, more efficient languages.
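As a rough sketch of what those language changes look like - my own example, using the async/await support that arrived with C# 5 and the Parallel class from .NET 4, with example.com standing in for a real URL - the same program can keep a thread free during I/O and spread CPU-bound work across all the cores:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ThroughputDemo
{
    // Asynchronous: the calling thread is released while the download is in flight.
    static async Task<int> FetchLengthAsync(string url)
    {
        using (var client = new HttpClient())
        {
            string body = await client.GetStringAsync(url);
            return body.Length;
        }
    }

    static void Main()
    {
        // True parallelism: split CPU-bound work across the available cores.
        long[] partialSums = new long[Environment.ProcessorCount];
        Parallel.For(0, partialSums.Length, core =>
        {
            long sum = 0;
            for (int i = core; i < 10000000; i += partialSums.Length) sum += i;
            partialSums[core] = sum;
        });

        long total = 0;
        foreach (long s in partialSums) total += s;
        Console.WriteLine("Parallel sum: " + total);

        // Blocking on the task here just keeps the demo to a simple console program.
        Console.WriteLine("Page length: " + FetchLengthAsync("http://example.com").Result);
    }
}
```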

Now Miguel de Icaza has waded into the fray with a response, "Can JITs be faster?" He explains how optimization can be performed by a JIT just as well as by a batch compiler - it is all a question of balancing the time it takes to generate the code against the quality of the code.

For me this is the wrong argument. 

It is not a question of how good the compilers are, but of how good the languages are. I like C++ and use it in situations where speed and efficiency matter (in such cases I usually move to C for even more efficiency). However, for a great many applications I'd rather use C# for its features and in fact I'd rather use a dynamic language such as Python or Ruby. My time is more important than CPU time. 

Of course, I'd like the most efficient implementation I can find, but at the end of the day the argument doesn't come down to efficient batch compilation versus inefficient JIT compilation. It comes down to the way the programming language helps me write correct and hack-resistant code. C++11 is a really big step forward for C++, but to me it feels like a really big step backward for programmer-kind.

This isn't a "my language is better than yours" situation. At the moment there are niche areas where particular languages are probably the best choice. It is more about the way Microsoft is currently dismantling, or at best de-emphasizing, managed code - and all, apparently, for some spurious argument about speed and efficiency.

More Information

When will better JITs save managed code?

Can JITs be faster?

Related Articles

Dumping .NET - Microsoft's Madness

Why your next language better be C++

Was .NET all a mistake?

 


 


 


Last Updated ( Saturday, 07 April 2012 )