C# Guru - An Interview With Eric Lippert
Written by Nikos Vaggalis
Thursday, 10 April 2014
Page 2 of 4
So typical tasks you could perform with Roslyn would be things like:
There is to my knowledge no plan for that sort of very dynamic feature in C#. However, there are things you can do to solve the simpler problem of generating fresh code at runtime. The CLR of course already has Reflection Emit. At a higher level, C# 3.0 added expression trees. Expression trees allow you to build a tree representing a C# or VB expression at runtime, and then compile that expression into a little method. The IL is generated for you automatically.
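Eric's description of expression trees can be illustrated with a short sketch. The tree below is built entirely at runtime through the `System.Linq.Expressions` API, and `Compile` emits the IL for the "little method" automatically; the class and variable names are just for illustration.

```csharp
using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    static void Main()
    {
        // Build, by hand and at runtime, a tree for the expression x => x * x.
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression<Func<int, int>> squareTree =
            Expression.Lambda<Func<int, int>>(Expression.Multiply(x, x), x);

        // Compile() generates the IL for a small method; no Reflection Emit
        // plumbing is needed on our side.
        Func<int, int> square = squareTree.Compile();

        Console.WriteLine(square(12)); // prints 144
    }
}
```

The same machinery kicks in when a lambda is assigned to an `Expression<TDelegate>` variable — `Expression<Func<int, int>> f = x => x * x;` produces an equivalent tree without the manual construction.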
NV: So let's talk a bit about VB.
EL: Believe me, it would have been a lot easier to make Roslyn only about C#, keep VB in maintenance, not add any new features, and so on. But that's not the decision that we made. Rather, we chose to re-architect both compilers so that they would meet the needs of modern programming language consumers for a long time to come.
NV: Although the TPL and async/await were great additions to both C# and the framework, they were also the cause of a lot of commotion, generating more questions than answers:
What's the difference between Asynchrony and Parallelism?
EL: Great question. Parallelism is one technique for achieving asynchrony, but asynchrony does not necessarily imply parallelism.
An asynchronous situation is one where there is some latency between a request being made and the result being delivered, such that you can continue to process work while you are waiting. Parallelism is a technique for achieving asynchrony, by hiring workers – threads – that each do tasks synchronously but in parallel.
An analogy might help. Suppose you’re in a restaurant kitchen. Two orders come in, one for toast and one for eggs.
A synchronous workflow would be: put the bread in the toaster, wait for the toaster to pop, deliver the toast, put the eggs on the grill, wait for the eggs to cook, deliver the eggs. The worker – you – does nothing while waiting except sit there and wait.
An asynchronous but non-parallel workflow would be: put the bread in the toaster. While the toast is toasting, put the eggs on the grill. Alternate between checking the eggs, checking the toast, and checking to see if there are any new orders coming in that could also be started.
Whichever one is done first, deliver it first, then wait for the other to finish — again, constantly checking to see if there are new orders.
An asynchronous parallel workflow would be: you just sit there waiting for orders. Every time an order comes in, go to the freezer where you keep your cooks, thaw one out, and assign the order to them. So you get one cook for the eggs, one cook for the toast, and while they are cooking, you keep on looking for more orders. When each cook finishes their job, you deliver the order and put the cook back in the freezer.
You’ll notice that the second mechanism is the one actually chosen by real restaurants because it combines low labour costs – cooks are expensive – with responsiveness and high throughput. The first technique has poor throughput and responsiveness, and the third technique requires paying a lot of cooks to sit around in the freezer when you really could get by with just one.
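The single-cook, asynchronous-but-non-parallel workflow maps directly onto C# 5 async/await. The sketch below uses invented timings, with `Task.Delay` standing in for the toaster and the grill: both appliances run on their own while the one "cook" (thread) starts each job and delivers whichever finishes first.

```csharp
using System;
using System.Threading.Tasks;

class Kitchen
{
    static async Task<string> MakeToastAsync()
    {
        await Task.Delay(200);   // the toaster works unattended; nobody blocks
        return "toast";
    }

    static async Task<string> FryEggsAsync()
    {
        await Task.Delay(300);   // likewise, the grill needs no supervision
        return "eggs";
    }

    static async Task Main()
    {
        // One cook starts both jobs without waiting for either to finish.
        Task<string> toast = MakeToastAsync();
        Task<string> eggs  = FryEggsAsync();

        // Deliver whichever order is done first, then the other.
        Task<string> first = await Task.WhenAny(toast, eggs);
        Console.WriteLine($"delivering {await first} first");

        Task<string> second = first == toast ? eggs : toast;
        Console.WriteLine($"delivering {await second} next");
    }
}
```

Note that no extra threads — no frozen cooks — are hired here; the awaits simply let the single thread alternate between the jobs and remain free for new orders.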
NV: If async does not start a new thread in the background, how can it perform I/O-bound operations without blocking the UI thread?
EL: No, not really.
Remember, fundamentally I/O operations are handled in hardware: there is some disk controller or network controller that is spinning an iron disk or varying the voltage on a wire, and that thing is running independently of the CPU.
The operating system provides an abstraction over the hardware, such as an I/O completion port. The exact details of how many threads are listening to the I/O completion port and what they do when they get a message, well, all that is complicated.
Suffice to say, you do not have to have one thread for each asynchronous I/O operation any more than you would have to hire one admin assistant for every phone call you wanted answered.
EL: It is useful any time obtaining the result of a computation is significantly removed in time from when the result was requested.
Suppose for example that you have a CPU-intensive computation to perform that will take several billion machine cycles. On a multi-core machine, there’s no reason for the CPU that is running the UI thread to block while it is waiting for the CPU running the computation to finish; the UI thread should continue to service the UI.
Or, suppose you are writing a game. When the player presses a button there is a siren, then three seconds later a door opens. That can be logically broken down into three tasks: start the siren, delay three seconds, open the door. During that three second delay, you still want the UI to be able to respond to other events. The delay and the subsequent door opening are logically an asynchronous operation; you want the thread to keep on working while it is waiting for the door to open. But it would be strange indeed to start one thread for the siren, one thread for the delay, and one thread for the door.
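The siren-and-door scenario reads almost line for line in await form. In this sketch, `StartSiren` and `OpenDoor` are hypothetical stand-ins for the game logic; the point is that no thread is created for the three-second wait — `await Task.Delay` just schedules the rest of the method to run when the timer fires, leaving the UI thread free in the meantime.

```csharp
using System;
using System.Threading.Tasks;

class Game
{
    // Hypothetical game actions, reduced to console output for the sketch.
    static void StartSiren() => Console.WriteLine("siren!");
    static void OpenDoor()   => Console.WriteLine("door opens");

    static async Task OnButtonPressedAsync()
    {
        StartSiren();                              // task 1: start the siren
        await Task.Delay(TimeSpan.FromSeconds(3)); // task 2: wait, without blocking
        OpenDoor();                                // task 3: then open the door
    }

    static async Task Main() => await OnButtonPressedAsync();
}
```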
NV: Is that an OS feature that was always there, available only to low-level programming, but is now accessible from high-level programming as well?
EL: Asynchronous I/O was always available to C# programmers; the Stream base class, for example, has ReadAsync and BeginRead methods for asynchronous I/O. However, using these methods often meant writing your code in a difficult, sort of "inside out" fashion. The Task Parallel Library and the await feature in C# 5 make it a lot easier to write code that looks like traditional synchronous code, but is actually asynchronous.
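A small sketch of the newer style Eric describes: the code below awaits `Stream.ReadAsync` and reads top to bottom like synchronous code, rather than splitting the logic across a `BeginRead` callback. A `MemoryStream` stands in here for a real file or network stream.

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

class AsyncReadDemo
{
    static async Task Main()
    {
        // A MemoryStream standing in for a real file or socket.
        using var stream = new MemoryStream(Encoding.UTF8.GetBytes("hello"));
        var buffer = new byte[16];

        // Looks synchronous, but the await point frees the calling thread
        // while the read is in flight (on a real device, at least).
        int read = await stream.ReadAsync(buffer, 0, buffer.Length);

        string text = Encoding.UTF8.GetString(buffer, 0, read);
        Console.WriteLine(text); // prints hello
    }
}
```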