June 28, 2007 at 3:42 AM
As I mentioned in a previous post, I'm relatively new to using SubSonic. I'm working on a web application, and the lure of not having to write a ton of boilerplate database access code led me to use this technology. Recently, I encountered a problem with it, however, and thought I'd pass on what happened and the solutions I found.
I had been happily using SubSonic within the workflow of the normal Asp.Net page lifecycle. I'd even been using some callbacks from pages with no problems at all. It was when I started adding some Asp.Net Ajax functionality to the site that I encountered problems. I set up a call to a web service from the browser. Within this web service method, I was using SubSonic to populate one of the active record classes it generated and then passing that class back as the return value. When I executed the code, I received an error message saying that an instance of the class I was returning could not be created because one of its properties was inaccessible.
After some investigation, I discovered that the problem was occurring when the object I was returning was being serialized to be passed back to my page. When SubSonic generates the active record classes for you, it generates a property for each column on the table that the class corresponds to. If that column is nullable, the property's type is set as a nullable type, as well. This makes perfect sense to me. However, the XmlSerializer cannot serialize nullable types. This means that if your table has nullable columns in it, the class generated by SubSonic cannot be serialized.
So, I modified my table so that none of the columns were nullable, regenerated my classes and tried to execute the code again...same error - different property. This time, the offending property was one defined in the AbstractRecord class from which my generated class derives. Its "get" accessor was marked protected and, as such, could not be read by the serializer. I had found some posts in the forums on the SubSonic site that pointed out the problem; the recommendation was to modify the class in the SubSonic source code, recompile the project and then regenerate your classes. Thankfully, though, the problem has already been addressed in the latest version (2.0.1a) of the project, and this change is not necessary if you upgrade.
It was a scary moment for me, but now that I understand the limitations of the XmlSerializer in regard to nullable types I can code around the issue and not encounter this problem again.
June 25, 2007 at 4:41 AM
OK, I admit it. When the Xml support first started showing up in Sql Server, I was a skeptic. I wondered why anyone would want to do all that parsing and string manipulation on the database tier. It made me wince to think about it. Now, a few years later, I'm wondering what I ever did without it, especially since the Xml data type has been added to Sql Server 2005.
As an example of its usefulness, consider a recent situation I found myself in. I had a collection of objects that represented changes that needed to be made to state in the database. I could have iterated through the collection and called a stored procedure for each object, applying the change sets one at a time, but I try to make any call across the network as chunky as possible and thought how nice it would be if I could send the entire collection of changes to the database at once. It's entirely possible to do just that with the Xml data type.
First, I serialized each of the objects into Xml and gathered the individual objects' state into a single Xml document. As an illustration, let's suppose the Xml looked like this.
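Judging from the Xpath and element names used in the stored procedure below, the document would have looked something like this (the values shown are illustrative):

```xml
<occs>
  <occ>
    <Service_Order_Id>1001</Service_Order_Id>
    <OTC_ID>5</OTC_ID>
    <Amount>25.00</Amount>
  </occ>
  <occ>
    <Service_Order_Id>1002</Service_Order_Id>
    <OTC_ID>7</OTC_ID>
    <Amount>12.50</Amount>
  </occ>
</occs>
```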
I can pass this Xml into a stored procedure as a string, where the proc takes a single parameter of type Xml.
CREATE PROCEDURE [dbo].[MySp]
    @OtcXml XML
AS
Inside the stored procedure I can parse through the Xml and place the data it contains into a temporary table that I can use any way I like.
SELECT ItemData.row.value('(Service_Order_Id)[1]', 'int') AS Service_Order_Id,
    ItemData.row.value('(OTC_ID)[1]', 'int') AS OTC_ID,
    ItemData.row.value('(Amount)[1]', 'money') AS Amount
INTO #Changes -- temporary table to hold the shredded rows (name illustrative)
FROM @OtcXml.nodes('/occs/occ') AS ItemData(row)
Starting with the FROM clause, I've basically specified an Xpath expression down to the level within the Xml document that corresponds to where my "rows" are defined. Each element returned by the Xpath expression is treated as a row, and each element below that level is treated as a column in that row. In the select statement, I'm specifying each column by telling Sql Server the name of the element within the row that I want the value from, along with the data type to cast the data to.
As I said before, I'm pulling the data out of the Xml into a temporary table so that I can run set-based Sql against the data to perform the updates I needed.
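To make that last step concrete, here's a sketch of the kind of set-based update I mean. Assume the shredded rows have landed in a temp table called #Changes; the target table name (Service_Order_Charges) is illustrative, not from the original code:

```sql
-- Apply every change in the collection at once with one set-based statement.
-- Service_Order_Charges and #Changes are hypothetical names for illustration.
UPDATE soc
SET soc.Amount = c.Amount
FROM Service_Order_Charges AS soc
INNER JOIN #Changes AS c
    ON soc.Service_Order_Id = c.Service_Order_Id
   AND soc.OTC_ID = c.OTC_ID;
```

One statement, one round trip, and the entire collection of changes is applied, which is exactly the "chunky" behavior I was after.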
June 18, 2007 at 3:47 AM
Sql Server 2005 brought a couple of great features to the table in regard to paging...CTEs and ROW_NUMBER(). Paging, of course, is returning a resultset in chunks (pages) so that you don't have to return a large number of rows all at once. For example, in a web application there may be a search page that allows users to search for orders by a date range. There is always one user who wants to list every order the company has received since the beginning of time. Without paging, every row returned by the search would be returned to the web server from the database and then passed on to the browser and displayed in a mile-long web page.
It's possible to accomplish paging on the web server (much more easily in .Net 2.0 than in previous versions, thanks to the inclusion of paging in several of the bundled web controls). In this case, the large result set is persisted on the web server and sent to the browser a page at a time. Using this approach, you take two sizable hits: bringing all the rows across the network from the database, and storing them in memory (or possibly on disk) on the web server.
Ideally, we'd like to be able to retrieve just a page at a time from the database and display it to the user. Sql Server 2005 makes this an easy task. Here's an example.
DECLARE @PageSize INT, @PageNumber INT, @TopRow INT, @BottomRow INT
SELECT @PageSize = 15, @PageNumber = 2
SELECT @TopRow = ( @PageNumber - 1) * @PageSize + 1,
@BottomRow = @PageSize + (@PageNumber - 1) * @PageSize;
WITH Orders AS
(
    SELECT Order_Number, Customer_Id, Order_Amount,
        ROW_NUMBER() OVER (ORDER BY Customer_Id ASC) AS RowNumber
    FROM Order_Header -- hypothetical source table
)
SELECT RowNumber, Order_Number, Customer_Id, Order_Amount
FROM Orders
WHERE RowNumber BETWEEN @TopRow AND @BottomRow
ORDER BY RowNumber ASC;
In this example, we begin by declaring the variables we'll need. Note that the page size and page number could, and normally would, be passed into a stored procedure containing this code. Next, we calculate the top and bottom rows of the page we're interested in displaying. These values are indexes into the resultset as a whole that mark where our page starts and where it ends (for example, with a page size of 15 and page number 2, @TopRow is 16 and @BottomRow is 30). Now, we create a CTE named "Orders". I think of CTEs as temporary views: we define the CTE and then select from it. In our CTE definition, we write the select statement that is our view into the data (in other words, the select statement that would return the entire resultset we're concerned with) and add a column to it based on the ROW_NUMBER() function. Basically, we specify a sort order, and ROW_NUMBER() assigns a row number to each row in the resultset based on that order. With the CTE defined, we can query it and return only the rows whose row numbers fall between the top and bottom row numbers we calculated earlier.
June 11, 2007 at 11:00 AM
I recently started working on a personal project and decided to give SubSonic a try at the same time. SubSonic is a DAL generation tool and it simply rocks! I spent maybe an hour or so watching the helpful videos provided on the site to learn the basics of setting up the tool, configuring it in the .config file and wiring it up to Visual Studio as an external tool. After it was set up, I was generating data transport/business classes and data access code from my database in no time. SubSonic generates business objects and collections that know how to retrieve by criteria and how to save themselves. Stored procedures are supported, as well. While I'm not yet a big fan of the query tool, it can be used to generate queries in code without resorting to managing dynamic sql as strings. I tend to be old school in some ways and still prefer to use stored procedures to manipulate the database. Even so, SubSonic offers me the ability to call any stored procedure on my database with a single line of code. It supports Sql Server, MySql and Oracle. Again, I love being more productive. SubSonic helps me be as productive as possible by relieving me of having to write so much code.
June 11, 2007 at 10:14 AM
I have several bits of code that I take from gig to gig and reuse. I wrote a general data access layer a few years ago and have used it numerous times. It's saved me untold hours. Over time, we craft these indispensable little gems that we can plug into our applications over and over again. I'm definitely a fan of getting to the finish line as quickly as possible, and keeping an organized collection of reusable code snippets helps accomplish that. I, personally, prefer to keep these pieces of code in a code repository on a server at home. I have my repository set up so that I can access it from anywhere across the internet, and when an opportunity to use one of these snippets arises, I pull it down and put it to work. As far as organization goes, I find that arranging the code by layer and then by purpose is sufficient.
I recently read a book on building wealth in which the author indicated that the amount of money you're making is not the important thing when building wealth. The acquisition of assets that generate income...and keep on giving...was the key. No matter how many lines of code you can write in a day, code assets will save you time and make you more productive.
June 7, 2007 at 8:24 AM
I’ve read, numerous times, the concepts surrounding the “null” value from a database perspective. “Null isn’t a value but rather indicates the absence of a value”. Null means unspecified or unknown. There’s a good deal of flexibility and power with null.
With a string variable, you can initialize it to null and later check to see if it has been initialized by checking for a null value. If you initialize the variable to an empty string, when you later check the value, you have to make a judgment call as to what an empty string means. Does it mean that the variable has been set to an empty string or that it hasn’t been initialized, at all? Initializing the variable to null allows you to make a distinction between “nothing has been specified” and “a value was specified, but it was nothing”.
You can set a string to null, but what about an integer? By default, an integer is initialized to zero, so the problem described above applies. If the integer variable has a value of zero, does that mean a zero was specified or that nothing was specified at all? By using nullable types, it’s possible to get the same behavior from an integer, datetime and so on as you do with a string.
By defining your variable as a nullable integer, like so:
int? quantity = null;
Dim quantity As Nullable(Of Integer) = Nothing
you can get exactly the same behavior as with the string. This is especially useful when working with a database. If you have an integer column in the database that’s mapped to an integer field on a business object, how do you handle assigning the value when the column is null? You probably set the integer field to 0, in this case. I’d argue it’s more correct to use a nullable integer type for the field and set it to null.
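To make the distinction concrete, here's a minimal sketch of reading such a column off a data reader (the column name "Discount" and the reader variable are illustrative, not from any particular project):

```csharp
// reader is an open System.Data.IDataReader positioned on a row.
// "Discount" is a hypothetical integer column that allows NULL.
int ordinal = reader.GetOrdinal("Discount");
int? discount = reader.IsDBNull(ordinal)
    ? (int?)null
    : reader.GetInt32(ordinal);

if (discount.HasValue)
{
    // A value was specified in the database (possibly zero, but specified).
}
else
{
    // The column was NULL: nothing was specified at all.
}
```

With a plain int field, both the NULL case and the legitimate-zero case would collapse into 0, and that distinction would be lost.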
June 6, 2007 at 8:52 AM
Welcome to the new Enterprise Etc web site. If you're looking for the old site with my articles and original blog, you can still find that content archived here.