Feb PSSUG – SQL Server and .NET CLR Integration

This month, Joe Smith will present on SQL Server and .NET Common Language Runtime integration and programming. With the release of SQL Server 2005, and subsequently SQL Server 2008, the ability to develop using the .NET runtime has become an intrinsic capability available to SQL developers.

This talk will explore:

- Basics of the .NET architecture and its integration with SQL Server
- Creation of artifacts built on .NET for use in SQL Server (user-defined functions, stored procedures, etc.)
- Security and access (how the integration is secured)
- Deployment and use of .NET artifacts to SQL Server
- Best practices in utilizing the .NET integration (one size does not fit all)
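One prerequisite worth knowing going in: CLR integration is disabled by default, and has to be switched on (by someone with appropriate server permissions) before any .NET code will run in the database:

```sql
-- CLR integration is off by default in SQL Server 2005/2008
exec sp_configure 'clr enabled', 1;
reconfigure;
```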


Joe is a Solutions Architecture Consultant with Dell Computer, Inc. He has many years of consulting experience working with customers across the landscape of small, medium, and large enterprises in a variety of industries. His past focus was primarily middle-tier and backend .NET development. His recent work at Dell focuses on solutions built using .NET and SQL Server, on or associated with SharePoint. Prior to joining Dell, Joe worked for Microsoft Services in a similar role, focusing on enterprise and application integration and .NET technologies such as Windows Communication Foundation and Windows Workflow.


Date: Wednesday, February 24

Time: 12:00 p.m. – 2:00 p.m.

Venue: Pittsburgh Technology Council, 2000 Technology Drive, Pittsburgh, PA 15219

Cost: No charge, but you must RSVP. Space is limited!

RSVP: Online

Granting Table and View Permissions in SQL Server 2005/2008

Granting permissions to users (or roles) in SQL Server can be a pain, especially when a large number of tables and users/roles is involved.  Faced with this task, I went looking for a script and found one at SQL Server 2005: Easily Grant Select to All Tables and/or Views in a Database.  The script is pretty good, but I made a few modifications to replace the cursor, use the SQL Server 2005 system views, and limit by schema.  This is an easy way to grant permissions for a database role to a specific schema.

Two system views are used: sys.database_principals, which lists the users and roles in a database, and sys.schemas, which lists the schemas and schema IDs for the database.  I'm not a big fan of cursors; I prefer a while loop.  While loops are easier to write and perform better.  On a small task like this the performance difference is negligible, but the ease of coding is nice.

Here is the query I ended up with:

-- @login can be a specific user or a role; obtain it by querying sys.database_principals
-- @schema_id: query sys.schemas for the list of schema IDs
declare @login varchar(50), @schema_id int
set @login = 'username'
set @schema_id = 1

declare @objects table (objid int identity(1,1), cmd varchar(500))
insert into @objects
select 'grant select on ' + quotename(schema_name(schema_id)) + '.' + quotename(name) + ' to ' + quotename(@login)
from sys.tables
where schema_id = @schema_id

-- preview the generated commands
select * from @objects

declare @oldid int, @curid int, @maxid int, @cmd varchar(500)
set @oldid = 0
select @maxid = count(objid) from @objects

while @oldid < @maxid
begin
    select top 1 @curid = objid,
           @cmd = cmd
    from @objects
    where objid > @oldid
    order by objid

    execute (@cmd)
    select @cmd
    set @oldid = @curid
end
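To find the values to plug into @login and @schema_id, query the two system views mentioned above:

```sql
-- users and roles eligible for @login
select name, type_desc
from sys.database_principals
where type in ('S', 'R');  -- SQL users and database roles

-- schemas and their IDs for @schema_id (dbo is schema_id 1)
select schema_id, name
from sys.schemas;
```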

Unable to cast object of type ‘Microsoft.SqlServer.Management.Smo.SimpleObjectKey’ to type ‘Microsoft.SqlServer.Management.Smo.Agent.JobObjectKey’. (Microsoft.SqlServer.Smo)

When creating a job in SQL Server 2005, if you get the following message:

Unable to cast object of type ‘Microsoft.SqlServer.Management.Smo.SimpleObjectKey’ to type ‘Microsoft.SqlServer.Management.Smo.Agent.JobObjectKey’. (Microsoft.SqlServer.Smo)

Make sure the versions of your client tools and server are the same.  When you install SP2, the service pack doesn't automatically update the client tools, too.  In my case, the database was at SP2, but the client tools weren't.  Updating the client tools to SP2 fixed the problem.
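A quick way to check the server side is SERVERPROPERTY; the client tools version shows under Help > About in Management Studio. The values in the comments are illustrative:

```sql
-- confirm the server's build and service pack level
select serverproperty('ProductVersion') as build,  -- e.g. 9.00.3042.00
       serverproperty('ProductLevel')   as level;  -- e.g. SP2
```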


SimMetrics – A Better SoundEx in SQL Server 2005/2008

Tasos Yalanopoulos has a great post about a replacement for the intrinsic SoundEx function, called SimMetrics, implemented as a CLR library that can be loaded into SQL Server.  The library has a number of methods for determining similarity metrics.

Tasos’ post is Beyond SoundEx – Functions for Fuzzy Searching in MS SQL Server, and contains a tutorial of how to load the library into SQL Server, and how to use it in queries.
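The general pattern for loading any CLR library follows the shape below; the path, class, and method names here are placeholders, not SimMetrics' actual ones (see Tasos' post for the real signatures):

```sql
-- register the assembly with the database (path is an example)
create assembly SimMetrics
from 'C:\assemblies\SimMetrics.dll'
with permission_set = safe;
go

-- expose one of its methods as a T-SQL function (names are illustrative)
create function dbo.Similarity(@a nvarchar(4000), @b nvarchar(4000))
returns float
as external name SimMetrics.[SimMetrics.SimilarityMetrics].Similarity;
go
```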


Rectangles, Temporary Tables and Table Variables

This week, we’ve been working with SQL DBA Kate Luxemburg, who has 20 years of DBA experience going all the way back to the early days of Ingres.  Kate had a 90-minute braindump yesterday with all sorts of useful tidbits—it’s a great opportunity anytime you can learn from someone with that much experience.  Here are some take-away messages.

Think of DB programming as visualizing rectangles

Application programming and DB programming are two different skills.  All DB results are in the shape of a rectangle, whether it’s one scalar result or a large recordset.  DB programming is strictly about managing rectangles.  The smaller the rectangles you work with, the faster your query will execute.

Restrict, then project

Large SQL statements produce large rectangles, then filter out data that doesn't need to be there.  It's sometimes better to start with three or four small rectangles (restrict) and join them together at the end of the query to produce the result set you return (project).

The very top of your query should define the primary recordset you want, and place it into your intermediate table structure (ITS, either a temp table or a table variable).  Use this recordset in subsequent joins and queries to reduce the amount of filtering necessary.

When restricting, note that the WHERE clause is processed before the joins are.  Make smaller rectangles by using the WHERE clause, and the joins will perform better.
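As a sketch of the idea (table and column names are invented for illustration):

```sql
-- restrict: carve out a small rectangle first
select order_id, customer_id
into #recent_orders
from orders
where order_date >= '20080101';  -- the WHERE shrinks the rectangle before any join

-- project: join the small rectangles at the end and return the result
select c.customer_name, r.order_id
from #recent_orders as r
inner join customers as c on c.customer_id = r.customer_id;
```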

Table Variables or Temp Tables?

Think about the area of your rectangles.  Table variables are great for smaller rectangles, but at some point memory fills up and your table variables get paged to disk by the OS.  That point is different on every system, so some testing will be involved.  Using temp tables and a properly sized tempdb is more efficient than OS memory paging.

What's the easiest way to compare the performance of two queries?  Use "set statistics io on", which reports the amount of IO it takes for a query to complete.  The less IO required, the better your query will perform.
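For example (the table name and the figures in the comments are illustrative):

```sql
set statistics io on;

select * from orders where order_date >= '20080101';
-- the Messages tab then shows, per table, something like:
-- Table 'orders'. Scan count 1, logical reads 42, physical reads 0, ...

set statistics io off;
```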

Don’t use either unless you have to

In some cases, you can replace an ITS with a CASE expression in your SELECT, and doing so will increase performance.
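For instance, a small fixed lookup that might otherwise be staged in a temp table or table variable can sometimes be inlined (hypothetical schema):

```sql
-- no intermediate table needed for a tiny, fixed lookup
select order_id,
       case status_code
           when 'O' then 'Open'
           when 'C' then 'Closed'
           else 'Unknown'
       end as status_name
from orders;
```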


How to create a CSV or delimited list with an ID field in t-sql

A question in the ASP.NET forums asked how to return the following output:

ticket_id    assigned_to
1            bsmith csmith dsmith

but was getting:

ticket_id    assigned_to
1            bsmith
1            csmith
1            dsmith

Since T-SQL (and query languages in general) lacks a lot of the array and concatenation features we take for granted in programming languages, you need a little trickery to make this work.  There are probably several ways to pull it off, but here's what I did.

My first thought was the great Creating a CSV list in SQL using COALESCE example.  But the tricky part here is that the poster wanted the ID of the ticket along with the list of users assigned to it, and you can't assign a value to a variable and retrieve data in the same statement.  So we have to use some sort of updateable array (usually a table variable or temp table in T-SQL) and a loop (I'm not partial to cursors, so I used a WHILE loop).

The original question had two tables joined together, so I tried to mimic the structure for my solution.  The tables I used looked like this:

ticket_id    assigned_to_id
1            1
1            2
1            3

user_id    user_name
1          bsmith
2          csmith
3          dsmith

And here’s the SQL I ended up with.

-- table variable to hold intermediate results
declare @assignments table(ticket_id int, assigned_to varchar(100))

-- couple of variables used in the loop
declare @assigned_to varchar(100),
    @ticket_id int

-- populate the table var with a list of all tickets
-- modify sql to suit your needs
insert into @assignments(ticket_id)
select distinct ticket_id from work_tickets
order by ticket_id

-- get the first ticket w/o any assignment information
select top 1 @ticket_id = ticket_id
from @assignments
where assigned_to is null

-- if anything was retrieved in the previous query
-- when we reach the end of the table, @@rowcount = 0 and the loop will end
while @@rowcount > 0
begin
    -- get all users assigned to the work ticket and join them into a
    -- space-delimited list
    -- modify join and where to suit your schema
    select @assigned_to = coalesce(@assigned_to + ' ', '') + convert(varchar(50), user_name)
    from work_tickets_assignment as a
    inner join work_tickets as t on a.user_id = t.assigned_to_id
    where t.ticket_id = @ticket_id

    -- add the user list into the table var
    update @assignments
    set assigned_to = @assigned_to
    where ticket_id = @ticket_id

    -- important!! clear the list so coalesce starts fresh on the next pass
    set @assigned_to = null

    -- get the next ticket w/o assignment info
    -- loop will end if we're done
    -- this query should match the one outside the loop
    select top 1 @ticket_id = ticket_id
    from @assignments
    where assigned_to is null
end

-- return the contents of the table var
select * from @assignments

This gave me the exact result the questioner wanted.  It's not the most scalable piece of code, but in a test on some other data I have, I was able to update over 800 unique "tickets" with over 2,000 "users" in under 4 seconds.  Like I said, not the most scalable, but it will get the job done for smaller data sets.
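For what it's worth, on SQL Server 2005 and later the same output can be built set-based with FOR XML PATH, which avoids the loop entirely (a different technique from the one above, using the same tables):

```sql
select t.ticket_id,
       ltrim((select ' ' + a.user_name
              from work_tickets_assignment as a
              inner join work_tickets as w on a.user_id = w.assigned_to_id
              where w.ticket_id = t.ticket_id
              for xml path(''))) as assigned_to
from work_tickets as t
group by t.ticket_id;
```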

Free SQL Server 2008 E-Book

I wasn't aware books had beta versions, too!  At any rate, it's free for now, warts and all.

Free e-book offer: Introducing Microsoft SQL Server 2008 by Peter DeBetta, Greg Low and Mark Whitehorn.  Reviewed by Kathleen Atkins, project editor, Microsoft Press.

Peter DeBetta, the principal author of this book, welcomes readers to the soon-to-be-released SQL Server 2008.  A SQL Server MVP with many other credentials and long-standing SQL Server experience, DeBetta (with his coauthors) offers in this electronic beta book a discerning view of the new version of SQL Server.

DeBetta and his coauthors, Greg Low and Mark Whitehorn (also database experts, consultants, developers, and writers), divide their discussion of SQL Server 2008 into seven topics: Security and Administration, Performance, Type System Enhancements, Programmability, Storage, Enhancements for High Availability, and Business Intelligence Enhancements.  In each of these chapters, the authors describe new or improved features.  They could not address every additional capability of or improvement in SQL Server 2008, however, because their view (and thus their readers' view) into the product is an early one.  Given the goals of the book, to introduce and explore the product in a preliminary way, both conceptually and practically (with sample code), readers will find plenty to interest them.

For example, database developers might be very glad to know that T-SQL
remains part of the product, and is, as yet, “still most often the best
choice for retrieving and manipulating data.” CLR integration has been
improved since SQL Server 2005 and the new product introduces system CLR
types, but “T-SQL is still not going anywhere.”

Because Peter DeBetta hopes to be the first technical book author on The New York Times' bestseller list, he recommends Introducing Microsoft SQL Server 2008 to everyone.  But the people who really ought to read it are those who want to begin using the new and improved tools in SQL Server 2008.

Register to download the free e-book, Introducing Microsoft SQL Server 2008.

dbdesc bug and great support from Logica2

One of my favorite database tools is dbdesc from Logica2.  I’ve reviewed it in the past.  It’s simple, inexpensive and does its job quickly and well.

Recently, I've had some turnover on my team, in addition to the usual bouncing around, and the database scripts became out of date with what was in the database.  I got the "all clear" to refresh a database and ended up wiping out some recent changes.  I couldn't recreate the tables and stored procedures from their creation scripts, but fortunately I had run dbdesc earlier and had its output to work with (you can run it as a command-line tool and include it in your automated builds).  Dbdesc includes the CREATE scripts in its documentation, so I copied, pasted, and executed them, and ran into a problem.

After a little poking around, I noticed that the tables were all being created as dbo.schema.tablename.  Upon closer examination (and this took a while to find), I saw that dbdesc was outputting the CREATE as

CREATE TABLE [dbo].[schema.tablename]

rather than

CREATE TABLE [schema].[tablename]
This small difference is what led to the creation errors.  Manually fixing the sql script solved my problem, and I reported the issue to Logica2.  This is where the tale of great customer support begins.

I sent in the bug report via their website at around 3 pm on Friday.  About three hours later, I had a reply from Jose saying he was looking into the issue.  Not a bad response time, and I wasn't expecting any update until today (Monday).  At 11:12 pm Saturday, I got another email from Jose saying the issue had been fixed, with a link to the new installer.  Very impressive, especially since it would be Monday before I was back at work and needed the software.  This was probably a very simple issue to fix, but it nonetheless received prompt attention on a weekend.  My previous build was 730, and the updated build was 737.

Thanks, Jose and Logica2, I love your products even more!