Bio: Billy Hollis is co-author of the first book ever published on Visual Basic .NET. He writes a monthly column for MSDN Online and is heavily involved in training, consultation, and software development on the Microsoft .NET platform, focusing on smart-client development and commercial packages. He frequently speaks at industry conferences such as Microsoft's PDC, TechEd, and Comdex.
As you can tell from the grey hair, I go back a long way in the business. I learned BASIC in 1975, learned FORTRAN in 1973, and started professional software development in 1978 on a system called the Microdata Reality system, in a language called Reality BASIC, a predecessor of the Pick operating system. The Pick system was interesting in that it had a database with an interesting delimited format, and that database was actually exposed at the operating system level, which simplified a lot of programming. It had quite a lot of pretty advanced concepts that didn't show up in other systems until 10 to 15 years later. So I've learned a lot of languages and written a lot of software over the years. In the .NET world I was co-author with Rocky Lhotka of the first book on Visual Basic .NET, which was the only book available on Visual Basic .NET for about six months, and as a consequence I was able to get involved in a lot of the early efforts, such as the .NET developer tour, and I instructed people at Microsoft on .NET technologies for quite a while. In the last few years I've been running a consulting practice based on .NET with my partner, and we do medium to large projects that require more advanced application of the leading-edge concepts in .NET. For example, in 2002 we were doing the advanced smart-client stuff that other people really hadn't realized was an important part of the .NET world. So I've specialized a bit in smart-client, user-interface stuff, and recently I've been doing a fair amount of work on workflow. The nice thing about that, and I worked on fourth-generation languages in the early 90s too, is that you get exposed to concepts earlier than other people do, so when those concepts hit the mainstream platforms such as Microsoft's, you have a better understanding of what they are good for.
So, for example, the LINQ query stuff we're doing now is not dissimilar in spirit to a lot of the query capabilities available in the 4GLs that came out in the late 80s and early 90s. Having worked for a company that sold one of those, I have a pretty good feeling about LINQ, because it's giving me back some things that I haven't had in a lot of years.
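As a rough illustration of the 4GL-like, declarative style he's describing, here is a minimal C# LINQ sketch; the record type, names, and data are invented for the example and are not from the interview:

```csharp
using System;
using System.Linq;

class LinqSketch
{
    // Hypothetical in-memory record type, purely illustrative.
    record Patient(string Name, int Age);

    // Returns adult patients' names in alphabetical order.
    public static string[] AdultNames()
    {
        var patients = new[]
        {
            new Patient("Ann", 34),
            new Patient("Bob", 61),
            new Patient("Cho", 17),
        };

        // A declarative, 4GL-style query written directly in the language:
        // filter, sort, and project without hand-written loops.
        var adults = from p in patients
                     where p.Age >= 18
                     orderby p.Name
                     select p.Name;

        return adults.ToArray();
    }

    static void Main() => Console.WriteLine(string.Join(", ", AdultNames()));
}
```

The point of the comparison is the declarative shape: like a 4GL query, the code states what subset and ordering it wants, and the engine decides how to iterate.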
In the entire time that I've been doing languages, I think the only dramatically new concepts I've seen were: moving to relational databases from the flat indexed structures we used at the time; moving from procedural-based to object-based design and development; and moving from character-based to GUI user interfaces, with the event-driven paradigm that came out of that. Everything else is almost just syntactic sugar or other kinds of frills around those three big transitions, and all of them took a long time. All of them took at least ten years for people to grasp what they were all about, why they were important, and how to do them, and every single one of them resulted in a dramatically new tool or product that became mainstream, because when you take on dramatic things like that, you can't really put them on existing products. There's a limit to what you can bolt onto the stuff you've got now before you get to the point where you have to rethink things. John Lam, in his keynote, was talking about Ruby and some of the concepts there, and he was getting at the point that we are now reaching that point in the asynchronous world, the world of the web, the world of all the threading things we can do now, multiprocessor machines: our platforms demand a new move of that nature. We don't really know exactly what it's going to look like yet, but I think we are on the verge of another transition of the same magnitude as the others.
I tend not to use the word enterprise, because from my perspective there's a certain amount of over-emphasis on it, and that comes out of the competitive nature of the industry. Java made its biggest impact in the enterprise space, so Microsoft responded by attempting to make a similar impact in the enterprise space, and they orient a lot of what they do around it. That's okay, and that space certainly is important, but I'm a little more of an advocate of the long-tail theory. Picture a graph (I'll try to draw this the right way round from the point of view of the viewers): there are a few companies that do a lot of development, and then the curve slopes down in terms of the size of the company and the amount of development it does. There's a gradually decreasing slope there. Companies like Microsoft, Oracle, and Sun tend to work on the big part of the graph, because in a short span they get a lot of the activity, but you shouldn't underestimate the stuff going on in what they call the long tail, because when you start adding up all those smaller companies, you end up with a pretty big application space there too. There's a similar concept in what Amazon does, in that the strength of Amazon is not that they sell two million copies of the latest Harry Potter book. They're happy to do that, of course; that's in the big part. The strength of Amazon is that they make it practical for you to order from the long tail, and that is in fact where quite a lot of their profit and volume come from: all of those books that only sell 10 or 15 copies a year, which they can still afford to carry. They take advantage of the fact that there is a latent demand for that kind of thing. In the software world, I think there is a big latent demand that is not being satisfied right now, in the long tail of software development.
The tools are very much oriented towards enterprise use, and the power that the enterprise folks need is sometimes overkill for what the folks down there in the long tail need. Our tools haven't caught up with that yet. Look at the tools we used to have: people deride Access, and I wrote an article myself called "Abusing Microsoft Access", about people who abuse it, but it was suitable for a lot of those long-tail applications in a way that no tool we have today is. To me, the biggest limitation of .NET is that it works well for me, and it works well for some of the people in the long tail, and I tend to reach a little further down than most, but it doesn't work well the further down that graph you go. The focus of Microsoft has been so much on the enterprise space, because they've kind of owned that middle space, that I think the long tail has been allowed to languish a bit. I'd like to see more attention given to the needs of developers in that space. If you're a developer in a small company, you can't afford to be an architect who's up on the latest language innovations; you've really got to worry more about the business needs, because there's only you and maybe one other person, so you have to be more of a generalist, which means the more of the details, the more of the plumbing, you can hide, the better off you are. I think this is one of the forces driving the need for a transition, to get away from some of the plumbing. I talked about this in one of the blog posts I put up a while back. I discussed what I called the Home Depot effect of using .NET, and in the comments people from the Java world chimed in and said they felt the same thing: when you go into modern frameworks that are targeted at the enterprise, and you want to do some fairly simple thing, and you're trying to find out inside that framework what to do, the emotional experience is very similar to walking into a Home Depot.
You're trying to find some plumbing part, and you don't know exactly what it's called, and you don't know where it is, so you wander around the aisles looking for it. If you're lucky you might find it, but maybe the directions on the package don't really tell you very much about how to use it; they assume that if you're there for that part, you already know. The same emotional experience is to a great extent present in the complex frameworks we have today: you walk into the framework and you're just overwhelmed by how much is there, and by the effort it takes to locate and learn to use the pieces that you need.
Yes, that's a perfect example of what the Patterns and Practices guys are doing: addressing the tall part of the graph and ignoring the long tail. To be fair to them, they construe that as their mission, because obviously the bigger you are, the more you need patterns, the more you need consistency, the more you need functionality. Look at a block such as the Composite Application Block: the ramp-up time on it is severe. If you have a large enough application, or a long string of applications that need a certain level of consistency inside a business, then the Composite Application Block can make a lot of sense, because you get to distribute that large ramp-up time, the time it takes to acclimate yourself to it, over a big amount of product. But if you're doing a modest-size application down there in the long tail, say a line-of-business app of 20 forms that's going to be used by a hundred users, the Composite Application Block is never going to be a good fit, because you're going to have to learn too much; you're going to spend too long figuring out how to apply it to your circumstances. So I think that in some respects the Patterns and Practices blocks, while there is a space in which they do very well and you get the advantages of tested, very functional code that does a lot, get over-sold a little bit as a kind of universal solution when they really are not. There is a space in which they work well. Take the Composite Application Block, for example. I have been asking members of my audience, "Who uses it?", and if they say they use it, I ask, "How long did it take you to understand it?" Of the ones who successfully implemented it, I think the smallest answer I've gotten so far was a month.
And it goes up from there, so you're talking about some severe time, and you can only really afford that if you are in the tall part of the graph.
6. One of the things I've noticed, and maybe you have some insight into it: there is the drag-and-drop data binding wizard stuff for a quick application, and there is Enterprise Library. Is there something in the middle?
That's a good point. The data binding is oriented towards a simple, sort of low-end case, but you still have to know some things, and partly because of that gap there's Enterprise Library, with fairly sophisticated things in it (you can even use data binding with Enterprise Library, I suppose), but I would say there is more than one gap. There's a gap at the extreme low end, in that if I just spray some forms out with the data binding we have now, they'll work until I start trying to do some fairly sophisticated things, and then I'm going to have to learn workarounds for things that the data binding does not do quite as transparently or as automatically as I would like. The number of those things gets smaller with every generation of data binding, but they're still there. One of the things that's missing is the low-end, Microsoft Access-type thing, where you literally don't worry about the data binding at all. When did you ever worry about data binding in a Microsoft Access application? You never did. We haven't achieved that level of simplicity of data binding in the .NET world.
I think that having an Access-type tool that would resolve into .NET code would make a lot of sense; even if it were only one-way, so that you could produce the code and then work on in that world, it would be better than what a lot of people have to do now, because they just have to learn too much. And then when you step above data binding, there's also a space in which I'm not sure there's an optimal solution, and there you see more of a tendency for people to develop their own. That's the space I'm in: I have my own data-binding implementation. Richard Hale Shaw made a good point this morning in a panel we did: you have to differentiate data binding as a technology in Windows Forms or ASP.NET from data binding as a pattern. The one thing you never want to do is build an application of any size, more than a dozen forms, in which you write a lot of custom logic to move things from controls into containers and back. That's just a recipe for a buggy, unstable application. You don't want to do that; you want to use a data binding pattern somewhere. Does that mean you use the data binding technology that's built in? It might. But in some cases, such as my own, I actually implemented the data binding pattern with my own components. Thereby I have more control, I get to target that technology more precisely at my needs, and I still have the benefits of not having to write all that GUI code that is essentially plumbing and has nothing to do with the business logic of the application.
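To make the technology-versus-pattern distinction concrete, here is a minimal C# sketch of the data binding *pattern* implemented with custom components, decoupled from any UI toolkit. This is not Hollis's actual implementation; all the types (`TextControl`, `Binder`, `Customer`) are invented for illustration, using reflection to centralize the control-to-object copying that would otherwise be duplicated across forms:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Stand-in for a UI control; a real implementation would wrap
// Windows Forms or ASP.NET controls.
class TextControl { public string Text { get; set; } = ""; }

// The pattern: one component owns all control<->object traffic,
// so no form contains hand-written copy logic.
class Binder
{
    private readonly List<(TextControl Control, object Target, PropertyInfo Prop)> _bindings = new();

    public void Bind(TextControl control, object target, string propertyName)
    {
        var prop = target.GetType().GetProperty(propertyName)
                   ?? throw new ArgumentException($"No property {propertyName}");
        _bindings.Add((control, target, prop));
        // Initial load: push the object's value into the control.
        control.Text = prop.GetValue(target)?.ToString() ?? "";
    }

    // Push edited control values back into the business objects.
    public void CommitToTargets()
    {
        foreach (var (control, target, prop) in _bindings)
            prop.SetValue(target, Convert.ChangeType(control.Text, prop.PropertyType));
    }
}

class Customer { public string Name { get; set; } = ""; }

class Demo
{
    public static string Run()
    {
        var customer = new Customer { Name = "Acme" };
        var box = new TextControl();
        var binder = new Binder();

        binder.Bind(box, customer, nameof(Customer.Name));
        box.Text = "Acme Ltd";      // simulate a user edit
        binder.CommitToTargets();   // one call moves every bound value back
        return customer.Name;
    }

    static void Main() => Console.WriteLine(Run());
}
```

The payoff is the one he describes: the moving of values is written once, in a component you control, rather than scattered across dozens of forms.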
That's right. Having started very early in 2002, being one of the first people doing large-scale forms-based development in .NET, I rapidly realized what the limitations were at that time, which were greater than the limitations today, and having been dissatisfied with VB classic data binding, I had already written a data binding replacement in VB classic; it was much easier to write it in the Visual Basic .NET of 2002, because I had full object capabilities to do it with. But yes, I brought that pattern over, implemented it, and we've been using it now for four years, and every time they improve data binding I go to my partner and say, "Well, maybe we should switch over to data binding because they've made it better," and he just shakes his head and says, "We've got something that works, and we control it, and it does exactly what we want." If you come to me at a conference and want to ask some detailed question on data binding, I'm not the right person to ask, because I've got my own, and I know just enough about the built-in data binding to judge what it's able to do and use it in demos.
This is an extraordinarily challenging time for people who want to stay out there on the edge and take best advantage of the technologies that are available. That doesn't mean the entire development community by any means, but there's a pretty good-sized chunk of it that attempts to stay out there, maybe 20% or 25%, and for those folks this is probably the most challenging time in our careers. If we look back at the last equivalent period, at least in the Microsoft world (most of my comments are in that context), the 2001-2002 time frame when we were getting ready for .NET and learning all the technologies in it, most of us had plenty of time to do that, because after the dot-com meltdown there wasn't that much to do that was any fun anyway; who wanted to do all that nasty ASP stuff? So we had a fair amount of time on our hands that we could invest in learning how to take best advantage of these technologies. During that period I wrote a lot of books, did a lot of training, and spent a lot of time writing sample applications. That ended when production work began in 2002, and since then there has been an unbroken rise in demand for development in the .NET world. There was an article in the Wall Street Journal not long ago that said .NET developer is one of the top five in-demand positions in the whole economy, and I don't mean just in the tech industry, but in the entire economy in the States. The job of .NET developer is that much in demand. So if you are in the position of the people who are out on the leading edge, now you have to gauge the investment of time in new technologies against the work people are trying to hand you for current technologies. The money is in doing stuff right now; the fun, and the investment in future potential and credibility, is in understanding what's going on out there.
It's a very difficult balancing act to figure out how much time to invest in each of those two things, especially when we've got such widely different areas. From my perspective, the four big areas we have to be concerned about are the three pieces of WinFX, which are Windows Presentation Foundation (WPF, formerly Avalon), Windows Communication Foundation (WCF, formerly Indigo), and Windows Workflow Foundation (WF), plus a fourth technology: building that 4GL-ish query capability into the languages, into C# and Visual Basic, which is called LINQ, Language INtegrated Query. Those are four pretty big technologies that we all have to figure out, and I think everybody is going to have to find a different path depending on their areas of emphasis. For me, working in health care and needing the very best user interface that technology can provide, Avalon is very important, and I expect to spend quite a bit of time looking at it. Almost by accident, I've spent the last couple of years working on very sophisticated workflow systems, and because of that I want to understand what Windows Workflow does for me. But I have to be honest about that one: I think it's a little lower on the totem pole for most people, because it's really just an engine, and there's a lot of stuff around it that would have to be developed to maturity before people look at it. There's Indigo for data transport, and yes, I'm interested, but they don't have the tooling ready for it yet, and I'm not really interested in the plumbing aspects; I don't want to be editing XML to use it. That I'm willing to put off. And then LINQ: I expect to wait until there's a project in which I'm doing fairly sophisticated data manipulation before I learn LINQ in any significant depth.
Avalon is built for a world of varying resolutions, varying sizes, varying aspect ratios. Because Avalon is completely vector-based in its graphics instead of bitmapped, and has an engine with the intelligence to rearrange things to match the scale and size of the current display device, it means that, for example in the health care world, I can have an application that runs on something small that doctors or other clinicians carry around, and it could run on desktops, and it could run on big monitors on the wall that the doctor might use while interacting with the patient. All those possibilities are there from the point of view of scaling the interface and presenting it in an appropriate fashion for each of them. I think Avalon is the technology we would use, because I don't think any of our technologies today would do that very well, and I don't see others on the horizon that will. The second thing Avalon offers is the ability to conceive a user interface in a three-dimensional coordinate space. People are going to see this and get used to it even more quickly than you might realize, because some of the UI in Vista, the next generation of Windows, uses that Avalon technology to render parts of the user interface in a three-dimensional way, so that you reach in and get everything that you want. I think there is enormous potential to apply that to the health care world, because health care works with some of the most complicated information structures of any industry, and it demands such a high level of usability and ease of use. You're not going to get a doctor to sit in a class for two weeks to learn how to use something. That's simply not going to happen. If you produce a system that functionally does what you want, but doesn't present a user interface that the doctor believes is appropriate and easy to use, then he simply won't use it.
So now we have what I hope are the technologies that will allow us to satisfy that level of user, by providing entirely new paradigms for how you navigate through a UI and how you present the information they need to look at. Ideas are already spinning out of the people I've talked to, and some out of me, on how we would use these techniques. I think health care is one of the places where they can be most aggressively applied.
I think the key to what Vista allows in terms of that three-dimensional manipulation is that you can orient things in such a way that for everything you'd like to get to, at least part of it is there where you can see it. And there is the possibility that the user interface would allow certain movements of the camera, so to speak. Depending on what you are doing, you might look around from a different perspective, and that leads to some interesting possibilities. But it also means there are interesting implications for the kinds of peripherals we're going to use with these systems. I think the scroll wheel on the mouse will become much, much more important, because you have to have something to navigate in 3D, and the buttons simply give you 2D; you might impose that 2D on a circle of some kind, but you still need the ability to navigate in 3D. To me, probably the best example of an application that has taken this on at a fairly new level and made it work is Google Earth, where you navigate around with the mouse but use the wheel to zoom in and out. That's a very basic example of the kind of user experience that I think Avalon will allow a wide variety of applications to implement.
That's right, and very few applications implement that, but I think we're entering an era in which it matters that all the Microsoft mice sold in the last four or five years have that capability. So in the Microsoft world we'll be able to move in and out, and I find myself wondering what the Mac folks are going to do. When 3D interfaces really become the way things are done, I wonder what their adaptation is going to be. They've got some bright minds; I'm sure they'll figure out something, but I think they're facing a bigger challenge than we are, because we already have the peripheral and input devices.
Right. I mean for all I know the Apple guys will move to virtual gloves or something.
14. Someone brought that up in a talk, and I'm thinking especially of the medical industry, where I've done some work. There's a system where they put the CAT scans and the MRIs and the X-rays all together and build up a three-dimensional model, say of a spine they're going to operate on. Is this a valid way to navigate in 3D?
That's exactly right. I think we're just at the very beginning of the possibility of using those kinds of virtual mapping things, where you take something that tracks a hand or whatever and reaches into the 3D space. We're at the very beginning of that, but the foundation for it is a user interface technology that, first of all, is not bitmapped, because you would never be able to do it that way; it has to be vector-based, and it has to have the capability of doing things in a 3D coordinate space, both of which are built into Avalon.
I'm not sure that workflow is going to get any easier from the perspective of how you put it in place; I don't think WF is going to make a dramatic difference there. What WF is going to do is allow more consistency in the engine. For example, today when you use BizTalk, there's a lot of integration work around constructing a workflow in BizTalk, and when BizTalk changes its engine to WF at some point in the future, that integration work will still be there. But what the workflow engine allows is for Microsoft to make it much more feasible for different products to have an engine that supports workflow, and then the tooling around that is what will eventually start to simplify getting workflow in place inside an organization. So WF itself isn't the answer there; it's what WF allows other people to build that will be the answer, and even there, there's a limit to what tools can do. When you start defining workflows, especially any workflow that touches on business processes, and you're trying to automate something that's done manually now, the amount of work you have to do, the amount of requirements gathering and understanding of the situation, and the amount of coordination with what the users expect: I don't see how you make that substantially less than it is today, and that's a big part of the workflow job right now. It's also too much trouble to get the technology working, and WF will start to help with that, and the things built around it will start to help. But getting a group of people to do something with a consistent, standardized workflow has its own challenges that are completely independent of the technology you use.
I think that's probably a good way of putting it. The WF engine is not a product; it's just a namespace, a piece of WinFX, and it doesn't really have a lot of built-in capabilities to talk to things. For instance, one of the capabilities I'll have to have the first time I use it in a real app is the ability to talk to a queuing engine, and I'm probably going to use the SQL Server Service Broker capability for that. That's not in the box; there's nothing that just plugs in and says "attach this workflow to that queue," so I'll have to write it. Fortunately, in the context of WF you only have to write it once, and what I write will be reusable by a wide variety of people, much more so than the complex workflow-engine plumbing I wrote two years ago for a company that was automating workflow. None of that is portable to the outside world; none of it is reusable. I could make it that way if I worked on it and productized it, but in the WF case there's a core around which other, much more standardized pieces can eventually build an ecosystem of pluggable components that simplify the production of that infrastructure plumbing.
I remember the summer of 2000, being in Orlando, seeing .NET introduced and being rather amazed at what it was capable of. I felt it would be the platform we'd use for a minimum of 10 years, perhaps 15, and I've certainly seen no reason to change that view in moving to the WinFX world. We are very much in need of an abstraction layer on top of it, conceptually similar to what Visual Basic provided for the old Windows API. The Windows API was just too hard to use in the C world, and Visual Basic put an entirely new level of abstraction on top of it, where most things were something you got out of a toolbox, not something you wrote a bunch of code from a template to do. We need a level of abstraction for the .NET Framework that's similar to that, and I think we'll see it, although it's hard to predict exactly what it will look like or where it will come from. In the meantime, Visual Basic and C# just get better with every generation, and they're the best tools I've ever used, even though I recognize they're not the ideal tools for everybody, in that they require you to know a lot; with them I'm able to get more interesting, more innovative, and more valuable things done than ever before. I think the gap between the people who are really able to use the tools and the guys who just grind out code gets bigger every year with the technologies in .NET.