Rico Mariani on Why Visual Studio Isn’t 64-bit

For a long time now, developers have been asking why Visual Studio hasn’t made the switch to 64-bit. The primary reason isn’t effort or opportunity cost; it is performance.

This may seem counter-intuitive, but the shift from 32-bit to 64-bit isn’t an automatic win. While 64-bit code has access to more CPU registers, that mostly benefits applications doing heavy number crunching over large arrays. For an application such as Visual Studio, which works with large, complex data structures, the overhead of 64-bit pointers dwarfs the benefit of extra registers. Rico Mariani of Microsoft explains,

Your pointers will get bigger; your alignment boundaries get bigger; your data is less dense; equivalent code is bigger.  You will fit less useful information into one cache line, code and data, and you will therefore take more cache misses.  Everything, but everything, will suffer.  Your processor's cache did not get bigger.  Even other programs on your system that have nothing to do with the code you’re running will suffer.  And you didn’t need the extra memory anyway.  So you got nothing.  Yay for speed-brakes.
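
To make the density argument concrete, here is a minimal sketch; the struct below is illustrative, not an actual Visual Studio data structure. A pointer-heavy node roughly doubles in size when its pointers grow from 4 to 8 bytes, so the same cache holds about half as many nodes.

```cpp
#include <cstdio>

// Illustrative only: a pointer-heavy node of the kind an IDE might use to
// model symbols. Four pointers plus an int is ~20 bytes in a 32-bit build
// and ~40 bytes (with alignment padding) in a 64-bit build.
struct SymbolNode {
    SymbolNode* parent;
    SymbolNode* firstChild;
    SymbolNode* nextSibling;
    const char* name;
    int         flags;
};

int main() {
    std::printf("sizeof(SymbolNode)           = %zu bytes\n", sizeof(SymbolNode));
    std::printf("nodes per 64-byte cache line = %zu\n", 64 / sizeof(SymbolNode));
    return 0;
}
```

The same doubling hits every pointer field in every node, which is exactly the "your data is less dense" cost Mariani describes.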

Mariani goes on to say,

Most of Visual Studio does not need and would not benefit from more than 4G of memory.  Any packages that really need that much memory could be built in their own 64-bit process and seamlessly integrated into VS without putting a tax on the rest.   This was possible in VS 2008, maybe sooner.  Dragging all of VS kicking and screaming into the 64-bit world just doesn’t make a lot of sense.

That isn’t to say Visual Studio can’t be improved. But Rico Mariani argues that the solution isn’t to give VS more memory, but rather to make it use less.

Now if you have a package that needs >4G of data *and* you also have a data access model that requires a super chatty interface to that data going on at all times, such that say SendMessage for instance isn’t going to do the job for you, then I think maybe rethinking your storage model could provide huge benefits.

In the VS space there are huge offenders.  My favorite to complain about are the language services, which notoriously load huge amounts of data about my whole solution so as to provide Intellisense about a tiny fraction of it.   That doesn’t seem to have changed since 2010.   I used to admonish people in the VS org to think about solutions with say 10k projects (which exist) or 50k files (which exist) and consider how the system was supposed to work in the face of that.  Loading it all into RAM seems not very appropriate to me.  But if you really, no kidding around, have storage that can’t be economized and must be resident then put it in a 64-bit package that’s out of process.
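
The out-of-process approach Mariani describes can be sketched roughly as follows. This is an assumption-laden illustration, not how Visual Studio actually hosts packages: the 32-bit host launches a hypothetical 64-bit helper (here called LangService64.exe) and talks to it over whatever IPC channel fits, keeping the memory-hungry component's pointer tax out of the main process.

```cpp
#include <windows.h>
#include <iostream>

// Sketch only: "LangService64.exe" and its command-line switch are made up for
// illustration. Real integration would use COM out-of-process servers, named
// pipes, or another IPC mechanism; error handling is trimmed to the minimum.
int main() {
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // CreateProcessW requires a writable command-line buffer.
    wchar_t cmd[] = L"LangService64.exe --pipe=vs_lang_service";

    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                        0, nullptr, nullptr, &si, &pi)) {
        std::wcerr << L"failed to start 64-bit helper, error "
                   << GetLastError() << L"\n";
        return 1;
    }

    // The 32-bit host keeps its dense in-process data structures; only the
    // component that genuinely needs more than 4GB pays for 64-bit pointers,
    // in its own address space.
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```

The sketch leaves out the interesting part, the data access protocol between the two processes, which is exactly where Mariani warns that a chatty interface would erase the win.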

Turning back to the question of more registers, Rico adds,

But as it turns out the extra registers don't help an interactive application like VS very much, it doesn't have a lot of tight compute intensive loops for instance. And also the performance of loads off the stack is so good when hitting the L1 that they may as well be registers -- except the encode length of the instruction is worse. But then the encode length of the 64 bit instructions with the registers is also worse...

So, ya, YMMV [your mileage may vary], but mostly those registers don't help big applications nearly so much as they help computation engines.

A frequent criticism of this stance points to the shift from 16-bit to 32-bit applications, a change developers in the mid-to-late ’90s universally hailed as beneficial all around. So why don’t we see the same gains when going to 64-bit? In a follow-up article titled 64-bit Visual Studio -- the "pro 64" argument, Mariani explains the difference.

It was certainly the case that with a big disk and swappable memory sections any program you could write in 32-bit addressing could have been created in 16-bit (especially that crazy x86 segment stuff).  But would you get good code if you did so?  And would you experience extraordinary engineering costs doing so?  Were you basically fighting your hardware most of the time trying to get it to do meaningful stuff?  It was certainly that case that people came up with really cool ways to solve some problems very economically because they had memory pressure and economic motivation to do so.  Those were great inventions.  But at some point it got kind of crazy.  The kind of 16-bit code you had to write to get the job done was just plain ugly.

And here’s where my assumptions break down.  In those cases, it’s *not* the same code.  The 16-bit code was slow ugly [word removed] working around memory limits in horrible ways and the 32-bit code was nice and clean and directly did what it needed to do with a superior algorithm.  Because of this, the observation that the same code runs slower when it’s encoded bigger was irrelevant.  It wasn’t the same code!  And we all know that a superior algorithm that uses more memory can (and often does) outperform an inferior algorithm that’s more economical in terms of memory or code size.

This lesson applies to most of the applications we write. If you are writing a computational engine, or jumping through hoops to manually swap memory, then shifting to 64-bit may be beneficial. But most of the time, staying with 32-bit and reducing the amount of memory consumed will have a much larger impact, both for the application and for the operating system as a whole.
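
As a rough illustration of the trade-off Mariani concedes in the quote above (the container choice and sizes here are mine, not his): an index that spends extra memory per element can turn a linear scan into a constant-time lookup, the kind of algorithmic win that dwarfs instruction-encoding costs.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

// Sketch: the hash set costs noticeably more memory per element than the flat
// vector, but membership tests drop from O(n) scans to O(1) lookups.
int main() {
    const std::uint32_t n = 1'000'000;
    std::vector<std::uint32_t> flat;
    std::unordered_set<std::uint32_t> indexed;
    for (std::uint32_t i = 0; i < n; ++i) {
        flat.push_back(i * 7);
        indexed.insert(i * 7);
    }

    const std::uint32_t needle = (n - 1) * 7;  // worst case for the scan

    auto t0 = std::chrono::steady_clock::now();
    bool inFlat = std::find(flat.begin(), flat.end(), needle) != flat.end();
    auto t1 = std::chrono::steady_clock::now();
    bool inIndex = indexed.count(needle) != 0;
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "linear scan: found=" << inFlat << " in "
              << std::chrono::duration<double, std::micro>(t1 - t0).count() << " us\n"
              << "hash lookup: found=" << inIndex << " in "
              << std::chrono::duration<double, std::micro>(t2 - t1).count() << " us\n";
    return 0;
}
```

That algorithmic gap, not the wider pointers, is what made the 16-bit to 32-bit jump feel like a universal win.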
