Bio: Richard Hale Shaw is the founder of the Richard Hale Shaw Group, which has consulted and trained software developers since 1993. He's created and chaired numerous technical conferences. He specializes in consulting and training on .NET programming in C# and Managed C++. Richard is a member of the INETA Speakers Bureau. He's been recognized by Microsoft as a C# MVP since 2004.
I'm a Microsoft C# MVP and I have a training and consulting firm based in Cambridge. We do training and consulting with clients all over North America, parts of Europe, and occasionally in the Far East, mostly focusing on the Microsoft technology space but occasionally venturing into other spaces; so the work that we do is not exclusively tied to Microsoft technology.
That question comes up a lot. We even get it in our courses, because people will take a course and ask, "How do I go and get other information?" There are a lot of sources for it, depending on what level you're at. Microsoft has done a very good job of building very comprehensive documentation on the .NET Framework, and I guess that same person might say, "Yes, but it's so comprehensive I don't know where to begin." But you have to start somewhere. They have some very good tutorials that come with .NET that you can use to build small, simple, step-by-step apps to get your feet wet if you're just getting started. If you've already been through formal training, you should come out knowing exactly what to do from there, at least. I think the people we compete against, who are in what we call the premium training space, do that; I wouldn't waste my money on low-end training. I won't give names, but when you go to a lot of training sources that are just reselling the Microsoft training courses, the problem is that the instructors may have just learned the course the week before, and may not really know the material or do a lot of real development with it.
I would be reluctant in those areas. But I think if you go through a course of ours or our competitors', then you'd want to know where you would go next. One place you can go is the Patterns and Practices group at Microsoft. They've got a group that specifically works on guidance technologies for how to build certain types of application, or how to solve certain types of problems that are common across a number of applications, and one of the products of that group is the Enterprise Library. It makes it possible for you to avoid solving a lot of problems that you'd otherwise have to solve again and again and again. If you use the features built into that library as a layer on top of the framework, you add power to your applications. Even if you don't completely agree with the content from the Patterns and Practices group, they have done an excellent job of explaining what they are trying to accomplish and which technologies you can utilize. It is a great place to learn more.
3. One of the criticisms I've heard about the Enterprise Library is that it is an Enterprise Class Library. Are there any other things that you know about that provide similar benefits and yet aren't of that scale for the average production application?
It's a good question. I think part of the problem is this: something is going to be either large, comprehensive, and unfortunately complex, or it's going to be smaller and easier to use in some respects, because there's not as much you have to learn, but it may not solve as many problems and won't scale in the long run. What I have yet to do is really explore EntLib from the perspective of the Framework Design Guidelines and see how well that team has actually adopted the guidelines that came out of the framework group itself, because part of the framework group's job is making the design guidelines. This is principally the book by Brad Abrams and Krzysztof Cwalina of the CLR team; Brad was a lead on the CLR team.
These two guys wrote a wonderful book detailing the design guidelines they used for how the framework was designed, how the classes are supposed to be designed, and the rules they use for building things. Those standards aren't always propagated throughout the framework, because they began designing them as they were building the framework, so they know about some of the warts in the framework, and they point those out too. They do a great job of saying, "We really wish we had gone back and redone this or that, because that's what we learned from the process, but here's how we're going forward today." They do a great job of explaining the idea of using scenarios to define how your own libraries should be built and how they should work. In other words, you look at who's going to use your library types, and you look at how they're going to use them. You can think of this as an extension of gathering and analyzing requirements.
It is very practical. I wouldn't use the word pragmatic, because it actually has another connotation. You look at how a developer is going to use a library you're building, and you try to define the scenarios by which they're going to use it. So you often come up with scenarios that involve a developer just wanting to get something done quickly. A more advanced developer, on the other hand, might want to do something really comprehensive and more involved. You try to design your types and your libraries in such a way that they'll easily accommodate those simple scenarios, which are often: instantiate, set some properties, call some methods, and get out of Dodge. If you think about it, 70% of the framework programming you do is just like that.
Then every now and then you have some class where you have to do much more work in order to get the thing done, but you've already made a decision at that point to do something that you know is more advanced than the simple scenario. The idea is to look at how you expect developers to use your library, to find these scenarios, and to try to design the types in the library in such a way as to accommodate those scenarios. So the developer who wants to do the quick instantiate-set-and-go can do so, and the more advanced developer has the other features they need to accomplish bigger things. My question would be, to go back to Billy's remark that you mentioned earlier about the Enterprise Library, to see how well it really accommodates that. I don't think I've looked at the Enterprise Library that way, because I haven't been thinking that way. Now that becomes a challenge for me, to go and evaluate that. It is possible that, as he says, they didn't actually build the Enterprise Library that way. It is also possible that they built it that way and it's just not that obvious. That would be worth exploring. For me, I think this is a guideline for how you build your own libraries. I'm in the process of creating material for a new course called Advanced C# that specifically addresses this, because once you've learned all the fundamentals of how to use .NET, you've learned reflection and collections, you've learned the higher-level abstractions like Web Forms, Windows Forms, and Web Services, and you want to know how to build bigger stuff, or build the stuff you've been building in a more comprehensive fashion, in a more advanced way. There are all these other issues that come up: best practices, how you solve certain problems, and how to do this in a consistent fashion that's easily repeated with other material.
First of all, if you look at the features of C# 2.0, the top feature that everybody knows about is support for .NET generics. People get confused and call them C# generics. Generics are not specific to the C# language; C# is just a language that lets you create generic types and methods and consume them. And it does so very elegantly; it's also sometimes used as the poster child for generics, but you can do generics with VB.NET and with C++/CLI. Any managed language eventually should be able to let you at least consume, if not also create, generic types and methods.
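As a minimal sketch of what's being described, here's a small generic class consumed with two different type arguments. The class and names are my own illustration, not something from the interview:

```csharp
using System;
using System.Collections.Generic;

// A hypothetical generic container: T is omitted from the definition
// and supplied by the consumer, so one class works type-safely with
// many element types. With a value type like int, there's no boxing.
public class Bag<T>
{
    private readonly List<T> items = new List<T>();

    public void Add(T item) { items.Add(item); }
    public T First() { return items[0]; }
}

public static class GenericsDemo
{
    public static void Main()
    {
        Bag<int> ints = new Bag<int>();     // T is int: no boxing/unboxing
        ints.Add(42);
        Console.WriteLine(ints.First());    // prints 42

        Bag<string> names = new Bag<string>(); // same class, different T
        names.Add("hello");
        Console.WriteLine(names.First());   // prints hello
    }
}
```

The same reuse in .NET 1.x would have required either a `Bag` of `object` (losing type safety and boxing value types) or a hand-written `IntBag`, `StringBag`, and so on.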
Then they added some stuff that is really more C#-specific, like anonymous methods and custom iterators. Other features, like partial types, are also in VB.NET, and that's not a complicated feature to add. Of all of those, probably generics to some degree, but even more importantly anonymous methods and custom iterators, have deeper implications for what's coming out with C# 3.0. Let's make sure that everybody watching this knows what I've mentioned. Generics are a mechanism for creating reusable types where you omit type information from the definition of a class or method when you define it, and somebody consuming the class or method supplies that type information. That makes a particular class or method much more reusable, because you can use it with a variety of types. You get the type safety you'd have if you had written the types in or hard-wired them, and you don't get any boxing or unboxing if the particular type you use when you consume it is a value type. It's definitely a major win, and people really should investigate utilizing generic types as much as possible. Anonymous methods are a mechanism for creating what looks like a method inside of a method that can be referenced by a delegate.
You're inside of a method, and somewhere inside the code of that method you define what looks like the body of a new method, but without a method name; hence the term anonymous method. You just have a curly brace, a bunch of code, a closing curly brace, and a semicolon, and you can assign that to a delegate. The advantage is that you can pass that delegate out of the scope where you defined the method. What happens is that the compiler takes that method out and creates a new method in and of itself and gives it a name; it is anonymous only at the time you write the source code, but it ends up getting a name at compile time. Because it is a real method, it can be referenced, and that simplifies a lot of the work in cases where you need to quickly create, for instance, an event handler, or where you have code that at run time could go down a variety of different control paths and you need to wire up and compose a method on the fly. You can do things like that much more easily.
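A short sketch of the pattern just described, in C# 2.0 syntax (the delegate and method names are mine). Note how the delegate, including the local it captures, is passed out of the scope where the anonymous method was written:

```csharp
using System;

public static class AnonMethodDemo
{
    // A delegate type the anonymous method will be assigned to.
    public delegate int Transform(int x);

    // Returns a delegate built from an anonymous method. The compiler
    // lifts this body into a real, named method at compile time, and
    // hoists the captured local 'factor' so the delegate can still use
    // it after this scope has exited.
    public static Transform MakeMultiplier(int factor)
    {
        return delegate(int x) { return x * factor; };
    }

    public static void Main()
    {
        Transform triple = MakeMultiplier(3);
        Console.WriteLine(triple(7)); // prints 21
    }
}
```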
The third of those three features is custom iterators. That addresses the fact that prior to C# 2.0, creating simple iterators (objects that implement the IEnumerator interface, or now the generic IEnumerator&lt;T&gt; interface) was really annoying. You'd have to put in place a lot of plumbing to make that happen. Custom iterators make this super easy and very powerful, because you're just defining a method that has to return either IEnumerator or IEnumerable, or their generic counterparts, and has at least one yield return or yield break statement. What you're really doing in that method is defining the MoveNext logic of the IEnumerator or IEnumerator&lt;T&gt; interface, and at the point where you say yield return, you're identifying the point where a value has been obtained that will be returned by the Current property of IEnumerator. So you're saying: here's how MoveNext is going to work; here's the value that will be returned by Current after MoveNext, and then the next part of the code will execute. But it's not re-entrant code, and it's not using multithreading; the compiler effectively creates a state machine behind the scenes that understands what went on in the method you originally wrote. Every time you say yield return, the state machine stops and sets the value aside, so Current can be used to retrieve it; when you resume execution by calling MoveNext again as the caller of the iterator, the custom iterator code continues execution through that state machine. It is very clever. The latter two, anonymous methods and custom iterators, have tremendous implications for C# 3.0.
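Here's a minimal custom iterator along the lines described above (method name and logic are my own example). The method body reads like ordinary sequential code, but the compiler turns it into the MoveNext/Current state machine:

```csharp
using System;
using System.Collections.Generic;

public static class IteratorDemo
{
    // A custom iterator: each 'yield return' marks the point where a
    // value becomes available through Current. Execution pauses there
    // and resumes on the next MoveNext call; the compiler generates the
    // state machine that makes this work without threads.
    public static IEnumerable<int> Evens(int max)
    {
        for (int i = 0; i <= max; i += 2)
            yield return i;
    }

    public static void Main()
    {
        // foreach drives MoveNext/Current on the generated state machine.
        foreach (int n in Evens(6))
            Console.Write(n + " "); // prints 0 2 4 6
        Console.WriteLine();
    }
}
```

Writing the equivalent by hand in C# 1.x would have meant a separate class implementing IEnumerator, with fields tracking the loop position across calls.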
The beauty of anonymous methods is their power, but the syntax for creating them is a little bit awkward. There's a little more plumbing than you should have to do to create them. C# 3.0 is going to have a mechanism called lambda expressions, with a very concise syntax for what becomes a new anonymous method; an elegant way of using these guys. The lambda expression then becomes the language for most of the new operators and features that are added with LINQ, DLinq, and XLinq that you've heard about.
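A side-by-side sketch of the two syntaxes (the names here are my own illustration): both produce the same delegate, but the C# 3.0 lambda drops the `delegate` keyword, the parameter type, and the braces:

```csharp
using System;

public static class LambdaDemo
{
    // C# 2.0 style: an anonymous method assigned to a delegate.
    public static readonly Func<int, int> SquareOld =
        delegate(int x) { return x * x; };

    // C# 3.0 style: the same delegate written as a lambda expression;
    // the parameter type and return are inferred by the compiler.
    public static readonly Func<int, int> SquareNew = x => x * x;

    public static void Main()
    {
        Console.WriteLine(SquareOld(5)); // prints 25
        Console.WriteLine(SquareNew(5)); // prints 25
    }
}
```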
Custom iterators can be used at all sorts of points under the hood, and they will also be used in those C# 3.0 features such as expressions and expression trees. Think about being able to use one of the C# 3.0 features to do, effectively, a SQL select. If you've looked at any of the C# 3.0 material, it says you can do a SQL-style select on anything that implements IEnumerable or IEnumerable&lt;T&gt; (it might be restricted to IEnumerable&lt;T&gt;, since this is going to be version 3.0 or later). Custom iterators can be used under the hood, for example, to implement those selects and retrieve information; because you can select on any collection, there's going to be custom iteration doing some of that under the hood, via an anonymous method as a lambda expression. So those become very powerful new abstractions built on top of what was already added in C# 2.0.
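To make the layering concrete, here is a small sketch (my own example) of a LINQ query over an in-memory collection. The `where` clause compiles down to a lambda passed to an iterator-based operator, the combination of the two C# 2.0 features being discussed:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class QueryDemo
{
    // A "select" over anything that implements IEnumerable<T>. The
    // where/select clauses become lambda expressions handed to
    // operators that are themselves implemented as custom iterators,
    // yielding matching elements one at a time.
    public static IEnumerable<int> EvensOf(IEnumerable<int> source)
    {
        return from n in source
               where n % 2 == 0
               select n;
    }

    public static void Main()
    {
        foreach (int n in EvensOf(new[] { 1, 2, 3, 4, 5, 6 }))
            Console.Write(n + " "); // prints 2 4 6
        Console.WriteLine();
    }
}
```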
In some ways you can make the case that they wanted to get some of these things out there. It's almost as if they said: let's get these things out and have people use them today in C# 2.0, because by getting them out there and seeing how effectively they're being used, we can better fine-tune our plans for harnessing and leveraging them with a higher level of abstraction in C# 3.0. With some of the features in C# 3.0, you may never go back and use the old anonymous methods in a straightforward fashion anymore. You could, and there's nothing wrong with that, but you may find that it is just easier to leverage them at the higher level. Some developers won't even have to learn anonymous methods.
6. Another thing that I've heard of is extension methods. If I have a static class that already has my stuff in it, why would I take that and extend the string class with it? Why would I ever want to extend the runtime?
Extension methods address that issue for a number of reasons: you may be unwilling or unable to derive from an existing class and extend it in a conventional fashion. If it's sealed, you can't derive from it and override a virtual. You may be in a situation where you don't have the source code, so you can't modify the original class; you've only got the assembly, or it's a sealed class that you can't derive from, and so on. You may not even want to derive from it, for a number of other reasons. The objective is to be able to easily make an existing class more adaptable to your needs without necessarily having to extend it in a conventional fashion.
I was working on something recently where I was constantly writing code to copy the contents of a collection out to an array using the built-in CopyTo method, which has now been updated in ICollection&lt;T&gt; to take T as the type of the array, so it's type-safe, therefore it is good. But then you have to do all the work of actually allocating the array, initializing it, and then calling the CopyTo method. I thought I should write a useful library method of my own that just takes care of that: you pass the collection object in as a parameter, and it takes care of the details of allocating the array, copying in the contents, and returning the allocated array with the contents in it. Instead of writing three lines of code again and again, I make one method call. The problem is that you have to expressly pass that collection object to my library method.
The value of an extension method in C# 3.0 is that I can take that library method and define it as an extension method, so that I can use it as if it were part of that collection class. In fact, I can define it so that it takes anything that implements ICollection&lt;T&gt;, and consequently be able to pass in any number of different collection types. When I use it as an extension method, I'll be able to say collectionObject.MyMethodName() and not have to pass a parameter, because implicitly the compiler knows the first parameter is going to be the actual collection object; that's just done transparently. So the idea is to make helper methods like that more seamless and easier to use and adapt. The extension method mechanism is also being used in the rest of C# 3.0 to adapt the other new features to existing types, so that they don't have to rewire those existing types themselves.
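A hypothetical reconstruction of the helper being described, written as a C# 3.0 extension method (the method name `ToNewArray` is my invention; the interview doesn't give the actual name). The `this` modifier on the first parameter is what lets the compiler pass the collection implicitly:

```csharp
using System;
using System.Collections.Generic;

public static class CollectionExtensions
{
    // The three lines of boilerplate (allocate, copy, return) wrapped
    // in one call, usable on anything that implements ICollection<T>.
    public static T[] ToNewArray<T>(this ICollection<T> source)
    {
        T[] result = new T[source.Count]; // allocate the right-sized array
        source.CopyTo(result, 0);         // type-safe CopyTo from ICollection<T>
        return result;                    // hand back the filled array
    }
}

public static class ExtensionDemo
{
    public static void Main()
    {
        List<string> names = new List<string> { "a", "b", "c" };

        // No explicit collection argument: the compiler transparently
        // passes 'names' as the 'source' parameter.
        string[] array = names.ToNewArray();
        Console.WriteLine(array.Length); // prints 3
    }
}
```

The same static class could still be called the old way, `CollectionExtensions.ToNewArray(names)`; the extension syntax is purely a compile-time convenience.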
There are very wide implications in being able to take a new method of your own and effectively treat it as if it were an instance method of a class, and that's what extension methods do for you. So it is a very powerful new feature they are adding to the language. The reason I mentioned that example is that I've got a use case that screams extension methods, one I didn't realize I had when I wrote it, because I wrote it back in the C# 2.0 beta; once I saw C# 3.0, I knew how I was going to use it in the future, because it will simplify the way I write code with utility methods like that.
C# 2.0 anonymous delegate/closures less than satisfying
Given the fiasco of closures in C# 2.0, I am going to lobby the Java community not to bother going down the same problematic path. Just stick with Java's anonymous inner classes and leave it at that. Java folks have a great alternative: the Groovy JVM language. The use of closures in Groovy turns out exactly as one would hope. It's the dominant cool feature of Groovy.
We should not make the same mistake as the C# community, where C# has been turned into a public works project for language designers, with the result that C# is becoming the PL/1 of this decade. We should basically lock Java down and go forward with Groovy as the next-generation JVM-based language. Groovy is just the right amount of embellishment, to where the goodness of Ruby can be enjoyed, whereas the close association and compatibility with Java, the Java class libraries, the JVM, and the Java developer infrastructure makes Groovy an immediately serious enterprise development language.
C# extension methods vs. AspectJ
I basically told him that AspectJ, in my view, is the ultimate patching language for Java-based code. With AspectJ one can take an off-the-shelf, third-party Java library, or an application server like JBoss, or any Java code in the Java runtime class library, and apply any manner of fix or embellishment to that existing code. This can be done without having to download all the source code and figure out how to get successful builds of the whole monstrosity (which can be a huge time waster). With AspectJ one just downloads the specific source code files that need to be patched (to use as a reference guide) and applies the aspects introducing the fix or new embellishments. Every dependent application at runtime will then get to enjoy the adjusted implementation.
Peter made mention of extension methods - but I don't see these as being powerful enough. You really need the full power of an AspectJ-like solution if you start going down this path of patching existing code from the outside.
However, it is good that AspectJ is a separate language from Java. Only certain expert developers need to do patching. Most developers don't need these kinds of capabilities cluttering up the language they use every day.