
T-SQL Improvements in SQL Server 2012

by Jonathan Allen on Mar 19, 2012. Estimated reading time: 5 minutes

T-SQL received a lot of improvements in SQL Server 2012 including support for the ANSI FIRST_VALUE and LAST_VALUE functions, declarative data paging using FETCH and OFFSET, and .NET’s parsing and formatting functions.

Fetch and Offset

Currently, SQL Server developers wishing to implement server-side paging tend to use imperative techniques such as loading a result set into a temp table, numbering the rows, and then selecting the range they are actually interested in. Others use the more modern ROW_NUMBER and OVER pattern, while some stick to cursors. None of these techniques is hard, but they can be time consuming and error prone. Moreover, they are inconsistent, as each developer has their own favorite implementation.

SQL Server 2012 addresses this by adding true declarative support for data paging. To do this, developers have been given the ability to add OFFSET and FETCH NEXT options to T-SQL’s ORDER BY clause. Currently these are not performance optimizations; SQL Server is still doing what you would have done manually. But as Dr. Greg Low says in his presentation, the people writing the query optimizer are in a much better position to improve things when they know what you are trying to achieve rather than how you are trying to achieve it.
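A minimal sketch of the new syntax, using a hypothetical product table and page size:

```sql
-- Return "page 3" of products, 10 rows per page: skip 20 rows, take 10.
-- Table and column names here are illustrative.
SELECT ProductID, Name, ListPrice
FROM Production.Product
ORDER BY Name
OFFSET 20 ROWS
FETCH NEXT 10 ROWS ONLY;
```

Note that OFFSET and FETCH are extensions of the ORDER BY clause, so a deterministic sort order is required for stable paging.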

Over Clause Windowing

Sometimes developers need to write queries based on the difference between rows. For example, you may be interested in the amount of time that has passed between the current and previous row’s timestamp. This is easy to do with cursors, but that is largely frowned upon for both stylistic and performance reasons. One can also use a subquery that is executed row by row, but that can be an incredibly expensive way to get the results. Finally, one can just punt the issue to the client, though that only works if the client is a programming language and not just a reporting tool.

Now you can directly access the previous row using the LAG function. Since you are explicitly declaring that’s what you are trying to accomplish, the query analyzer will retain that previous row in memory so that a subquery is not needed, which in turn should result in a significant performance boost. While LAG defaults to the previous row, an offset can be provided if you need to reach back further.
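The timestamp-difference scenario described above might be sketched like this, assuming a hypothetical event-log table:

```sql
-- Seconds elapsed between consecutive log entries.
-- dbo.EventLog, EventID, and EventTime are illustrative names.
SELECT EventID,
       EventTime,
       LAG(EventTime) OVER (ORDER BY EventTime) AS PrevEventTime,
       DATEDIFF(SECOND,
                LAG(EventTime) OVER (ORDER BY EventTime),
                EventTime) AS SecondsSincePrev
FROM dbo.EventLog;

-- An explicit offset reaches further back, e.g. three rows:
-- LAG(EventTime, 3) OVER (ORDER BY EventTime)
```

For the first row there is no previous row, so LAG returns NULL (an optional third argument supplies a default instead).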

LAG, and its twin LEAD, are part of the ANSI standard and this is a feature developers have been asking for since Microsoft partially implemented the OVER clause in SQL Server 2005.

FIRST_VALUE and LAST_VALUE are also supported in this release.
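A sketch of both functions against a hypothetical orders table; note that LAST_VALUE usually needs an explicit window frame, because the default frame ends at the current row:

```sql
-- Compare each order's amount with the smallest and largest amounts
-- for the same customer. dbo.Orders and its columns are illustrative.
SELECT CustomerID,
       OrderID,
       Amount,
       FIRST_VALUE(Amount) OVER (PARTITION BY CustomerID
                                 ORDER BY Amount) AS SmallestOrder,
       LAST_VALUE(Amount)  OVER (PARTITION BY CustomerID
                                 ORDER BY Amount
                                 ROWS BETWEEN UNBOUNDED PRECEDING
                                      AND UNBOUNDED FOLLOWING) AS LargestOrder
FROM dbo.Orders;
```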

Reflection

Previously, developers wanting to determine what a query or stored procedure would return used the SET FMTONLY command. This allowed them to preview the columns coming back without actually executing the query. Unfortunately, the information returned is limited to the column definitions you would have gotten back had you executed the query.

With the new sp_describe_first_result_set procedure, developers get detailed information on the first result set the query or stored procedure will return. Information includes data types and scales, source tables/columns, whether the column can be updated or is computed, and a wealth of other information. This is also available as the dynamic management functions sys.dm_exec_describe_first_result_set and sys.dm_exec_describe_first_result_set_for_object.
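Two sketches of how this might be used; the stored procedure name is hypothetical:

```sql
-- Inspect the shape of an ad hoc query's first result set.
EXEC sp_describe_first_result_set
     @tsql = N'SELECT ProductID, Name FROM Production.Product';

-- The same metadata for an existing object, via the DMF:
SELECT name, system_type_name, is_nullable, is_updateable
FROM sys.dm_exec_describe_first_result_set_for_object(
         OBJECT_ID('dbo.MyProcedure'), 0);
```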

Defensive Coding

Traditionally, developers have been at the mercy of their colleagues when it comes to calling stored procedures. With no compile-time guarantees as to what would be returned, accidental breaking changes are a huge concern. While T-SQL doesn’t offer anything to prevent these mistakes, it can minimize them with the RESULT SETS option.

When specified, the RESULT SETS option allows developers to require that the stored procedure return a specific data structure. If the result sets from the procedure differ in any way from what was requested, the batch terminates with an error. Since this is a runtime error, we recommend developers using this option have a full set of unit tests to ensure the error is triggered before the code hits production.
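A minimal sketch of the syntax, with an illustrative procedure name and contract:

```sql
-- Fail the batch if dbo.GetCustomer ever stops returning
-- exactly these two columns with these types.
-- Procedure and column names are hypothetical.
EXEC dbo.GetCustomer @CustomerID = 42
WITH RESULT SETS
(
    (
        CustomerID int NOT NULL,
        Name       nvarchar(100) NOT NULL
    )
);
```

A procedure that returns multiple result sets would list one parenthesized column list per set, in order.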

Error Handling

T-SQL has had support for TRY-CATCH since 2005, but strangely THROW was missing until now. Without arguments, THROW works like it does in C# or VB inside a catch block. That is to say, it re-throws an exception without losing any of the information captured at the time. This is useful for logging or adding items to a re-try queue while still informing the application that something went wrong.

When used with arguments, THROW is similar to RAISERROR except that it supports error numbers not in sys.messages and the severity is always 16. Also unlike RAISERROR, uncaught THROW errors are always batch terminating.
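Both forms might be sketched as follows; the table names are illustrative:

```sql
-- Parameterless THROW: log, then re-throw the original error intact.
BEGIN TRY
    DELETE FROM dbo.Orders WHERE OrderID = 42;   -- illustrative statement
END TRY
BEGIN CATCH
    INSERT INTO dbo.ErrorLog (Message) VALUES (ERROR_MESSAGE());
    THROW;   -- re-raises with the original number, message, and line info
END CATCH;

-- THROW with arguments: error number (50000 or higher), message, state.
-- Severity is always 16 and the number need not exist in sys.messages.
THROW 50001, N'Order not found.', 1;
```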

Parsing and Conversions

T-SQL now supports a PARSE function that includes the option to specify a culture. The culture must be one supported by the .NET Framework, suggesting how it’s implemented, and the function is available in a TRY_PARSE version as well.

Likewise, there is a new TRY_CONVERT function. Both it and TRY_PARSE return NULL if the conversion fails.
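A quick sketch of the culture-aware and failure-tolerant variants:

```sql
SELECT PARSE('20.12.2012' AS date USING 'de-DE')     AS GermanDate,   -- 2012-12-20
       TRY_PARSE('not a date' AS date USING 'en-US') AS BadParse,     -- NULL
       TRY_CONVERT(int, '123')                       AS GoodConvert,  -- 123
       TRY_CONVERT(int, 'abc')                       AS BadConvert;   -- NULL
```

Note that plain PARSE and CONVERT would raise errors on the failing inputs instead of returning NULL.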

Going the other direction, the FORMAT function uses the .NET formatting strings. This is slower than native functions such as STR, but more flexible.
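For instance, FORMAT accepts the familiar .NET standard and custom format strings, with an optional culture:

```sql
SELECT FORMAT(1234567.89, 'N2', 'en-US') AS UsNumber,   -- 1,234,567.89
       FORMAT(1234567.89, 'C', 'de-DE')  AS EuroPrice,  -- currency, German format
       FORMAT(GETDATE(), 'yyyy-MM-dd')   AS IsoDate;    -- custom date pattern
```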

Date/Time Functions

While still woefully inadequate, T-SQL is slightly better at date/time processing. The EOMONTH function returns the last day of the month, a useful feature for reporting. The xxxFROMPARTS set of functions allow one to construct dates and times using a set of parameters instead of a single string. This includes support for the data types Date, DateTime, DateTime2, DateTimeOffset, SmallDateTime, and Time.
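A sketch of both additions:

```sql
SELECT EOMONTH('2012-02-15')    AS EndOfMonth,      -- 2012-02-29 (leap year)
       EOMONTH('2012-02-15', 1) AS EndOfNextMonth,  -- offset in months
       DATEFROMPARTS(2012, 3, 19)                   AS JustDate,
       DATETIMEFROMPARTS(2012, 3, 19, 14, 30, 0, 0) AS DateAndTime;
```

Because the FROMPARTS functions take integer arguments rather than strings, they sidestep the regional ambiguity of formats like '03/19/2012'.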

Misc. Functions

The CHOOSE function from Access and Visual Basic has made its way into T-SQL. Under some circumstances this can be used as a less verbose version of CASE. Another function cribbed from those languages is IIF.
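Both are shorthand for simple CASE expressions:

```sql
SELECT CHOOSE(3, 'Bronze', 'Silver', 'Gold') AS Tier,    -- Gold (1-based index)
       IIF(1 > 0, 'yes', 'no')               AS Answer;  -- yes
```

CHOOSE returns NULL when the index falls outside the list, and IIF(cond, a, b) is equivalent to CASE WHEN cond THEN a ELSE b END.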

CONCAT can be used for string concatenation. Besides making it easier to port code from other database languages, this has different null handling than the + operator. Itzik Ben-Gan writes,

> The concatenation operator + yields a NULL on NULL input. The CONCAT function converts NULL inputs to empty strings before concatenation. Of course you can get by using the COALESCE function, replacing a NULL input with an empty string, but this makes the code messy.
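The difference in NULL handling quoted above can be seen side by side:

```sql
SELECT 'a' + NULL + 'c'       AS PlusResult,    -- NULL
       CONCAT('a', NULL, 'c') AS ConcatResult;  -- ac
```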


While these are lovely.. by Roopesh Shenoy

I would have been excited to see some testability improvements like stubbing stored procedure calls. Even with a framework like TST, it's not very easy to write unit tests for stored procs - in fact most of the time they start looking like integration tests due to other dependencies not being eliminated.

Re: While these are lovely.. by Mark N

That should tell you something about SPs in general. Use sparingly, if at all. They violate many programming principles.

Re: While these are lovely.. by Roopesh Shenoy

I know, but the reality is that a lot of legacy code is already there in form of SPs, and there is no immediate business value in rewriting them in the business layer (not to mention the risk of introducing new issues).

Either Microsoft should just stop improving them and say that they are deprecated (and we can just move on to write in better programming languages), or just make them more bearable. There is no point in adding more features to TSQL when basic testability is an issue.

Re: While these are lovely.. by Paul Hunter

Yes, in fact, why use a database at all? After all it's just a place where corporations store one of their most valuable assets. Why would a company want to have the scalability & reliability of stored procedures? After all, any C# or VB programmer can build such an elegant object model that there's no longer a need to have a well designed and normalized database.

Oh wait, it's the programs that get thrown in the trash & the data that lives on. The company I work for is on its 4th generation of application code and has been using the same data for 20+ years. Yes, the application code has improved and been able to do some very interesting things. One of the biggest has been that it's able to save the data. Data is a constant in the business world and the best way, despite opinions to the contrary, to deal with it is in its native environment, accessed via SQL.

Some help, please? by Barbara Smith

Great article Jonathan! I am working on an executive overview that I need to present to my manager, can anybody point me to some other high-level discussions on this topic, I'm not a great writer (except when it comes to code :)) and I would like to see how some people describe it in a way that's easy for non-technical folks ... Thanks

Re: Some help, please? by Olive Williams

Hi, Barbara. Maybe this blog can help you: webcreek.com/2016/04/17/what-is-sql-azure/ I guess Jonathan wrote a fantastic article appropriate for technical folks, but there you will find more general things. Good luck!
