Tagged: Sql Server

Fast & Simple Data Access using Dapper.net

Dapper is a micro-ORM. It gives you an easy way to work with your database and map query results to strongly typed objects. It is simple to use, it does simple stuff, and it essentially extends the IDbConnection interface. So if you are looking for a way to query your data really fast (yes, this is about as fast as it gets short of pure ADO.NET), you should check out Dapper.net. It is developed by some smart folks over at Stackoverflow.com! (Another good reason to try it out!)
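Just to give a flavor before we dig in, here is a minimal sketch (not from the walkthrough below) of what "extends IDbConnection" means in practice. The connection string is a placeholder and the Customer class is the POCO defined further down this post.

    // With "using Dapper;" in scope, Query<T> becomes available on any open connection
    // and maps the result set onto your objects. Connection string is illustrative only.
    using (var connection = new SqlConnection("Data Source=.;Initial Catalog=NorthWind;Integrated Security=True"))
    {
        connection.Open();
        IEnumerable<Customer> customers = connection.Query<Customer>("select * from Customers");
    }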

Installing Dapper

It is simple to install Dapper; you can use the NuGet Package Manager to add it to your project. From Visual Studio, click Tools > Library Package Manager > Package Manager Console.
Type in Install-Package Dapper and hit Enter.

In order to demonstrate Dapper, we will use the standard Microsoft sample database NorthWind.

In the NorthWind database, the Customers table represents customers and the Orders table represents orders placed by those customers.

Here are some POCOs we will use:

    internal class Customer
    {
        public string CustomerId { get; set; }
        public string CompanyName { get; set; }
        public string ContactName { get; set; }
        public IEnumerable<Order> Orders { get; set; }   
    }

    internal class Order
    {
        public int OrderId { get; set; }
        public DateTime OrderDate { get; set; }
    }

Here is a useful helper class that gets us a valid, open SqlConnection object:

    internal static class SqlHelper
    {
        // Reads the "NorthWind" connection string from app.config
        // (requires a reference to System.Configuration).
        public static SqlConnection GetOpenConnection()
        {
            var connectionString = ConfigurationManager.ConnectionStrings["NorthWind"].ConnectionString;
            var connection = new SqlConnection(connectionString);
            connection.Open();
            return connection;
        }
    }

And a data access class that actually does some work!!

    internal class CustomerDao
    {
        public IEnumerable<Customer> GetAll()
        {
            const string sql = "Select * from Customers";
            using (var connection = SqlHelper.GetOpenConnection())
            {
                return connection.Query<Customer>(sql);
            }
        }

        public IEnumerable<Customer> GetById(string customerId)
        {
            const string sql = "Select * from Customers where CustomerId = @Id";
            using (var connection = SqlHelper.GetOpenConnection())
            {
                return connection.Query<Customer>(sql, new { Id = customerId });
            }
        }

        public IEnumerable<Order> GetOrders(string customerId)
        {
            const string sql = @"select o.OrderId, o.OrderDate from Customers c inner join Orders o
                                 on c.CustomerId = o.CustomerId and c.CustomerId = @Id";

            using (var connection = SqlHelper.GetOpenConnection())
            {
                return connection.Query<Order>(sql, new { Id = customerId });
            }
        }
    }

The class above illustrates a straight select and a simple join. You can use the same strategy to run your insert and update queries. Dapper also supports multi-mapping, which lets you auto-map an aggregate (for example a Customer and its Orders) in a single query, as well as multiple result sets. For more, visit the Dapper page on Google Code: http://code.google.com/p/dapper-dot-net/
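Neither of these next two methods is part of the original DAO above, but as a rough sketch of those two points (method and parameter names are mine), here is an insert via Dapper's Execute and a multi-mapping query that populates each Customer's Orders collection, reusing the same SqlHelper and POCOs:

    // Hypothetical additions to CustomerDao (requires using Dapper; using System.Collections.Generic;)
    public int InsertOrder(string customerId, DateTime orderDate)
    {
        const string sql = "insert into Orders (CustomerId, OrderDate) values (@CustomerId, @OrderDate)";
        using (var connection = SqlHelper.GetOpenConnection())
        {
            // Execute returns the number of affected rows.
            return connection.Execute(sql, new { CustomerId = customerId, OrderDate = orderDate });
        }
    }

    public IEnumerable<Customer> GetCustomersWithOrders()
    {
        const string sql = @"select c.CustomerId, c.CompanyName, c.ContactName,
                                    o.OrderId, o.OrderDate
                             from Customers c inner join Orders o on c.CustomerId = o.CustomerId";

        using (var connection = SqlHelper.GetOpenConnection())
        {
            var lookup = new Dictionary<string, Customer>();
            // splitOn tells Dapper where the Order columns begin in the result set.
            connection.Query<Customer, Order, Customer>(sql, (customer, order) =>
            {
                Customer existing;
                if (!lookup.TryGetValue(customer.CustomerId, out existing))
                {
                    existing = customer;
                    existing.Orders = new List<Order>();
                    lookup.Add(existing.CustomerId, existing);
                }
                ((List<Order>)existing.Orders).Add(order);
                return existing;
            }, splitOn: "OrderId");
            return lookup.Values;
        }
    }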

Here are some tests that exercise the code above:


    [TestFixture]
    public class DapperTest
    {
        private CustomerDao _customerDao;
        [TestFixtureSetUp]
        public void TestFixtureSetUp()
        {
            _customerDao = new CustomerDao();
        }
        [Test]
        public void CreateConnection_ReturnsNotNull()
        {
            var connection = SqlHelper.GetOpenConnection();
            Assert.That(connection, Is.Not.Null);
        }
        [Test]
        public void GetAllCustomers_Returns_Records()
        {
            var allCustomers = _customerDao.GetAll();
            Assert.That(allCustomers.Count(), Is.Not.EqualTo(0));
        }

        [Test] 
        public void GetCustomers_With_Where_Clause()
        {
            var testId = "BERGS";
            var allCustomers = _customerDao.GetById(testId);
            Assert.That(allCustomers.Count(), Is.Not.EqualTo(0));
        }

        [Test]
        public void GetOrders_For_Customer()
        {
            const string testId = "BERGS";
            var orders  = _customerDao.GetOrders(testId);
            Assert.That(orders.Count(), Is.Not.EqualTo(0));
        }
    }

Massive and PetaPoco are two other micro-ORMs that compete with Dapper. You can find Massive here => https://github.com/robconery/massive and PetaPoco here => http://www.toptensoftware.com/petapoco/

(I haven't tried Massive, but I have played with PetaPoco. I find the auto-mapping features in PetaPoco better than Dapper's.)

I hope you enjoyed this post! Happy Programming!
Cheers!

A broad look at Web Application Performance

So does your web application perform well? Is it responsive enough to improve the web user experience? Users can relate only to what they see and what they click, that is, the user interface (UI), and the experience with it (UX) depends largely on the performance of the overall
application. But the complaints we usually hear are "this page is slow", "this grid takes forever to load", "we click on that button and then we go for a coffee break"… and the UI is always blamed first. We all know that perception plays a big part, but the services guy ignores it, and the database developer closes his eyes too, until someone proves where the actual bottleneck is.
Performance Tips
A close friend and ex-colleague called me the other morning asking for help figuring out why his application was timing out; users were screaming and cursing one particular search page, which had suddenly started timing out the week before. They had tried all the usual tricks: iisreset, checking logs, checking db connections, even a few server restarts, to no avail. So that morning we started looking through the logs and stack traces, and we could easily spot the culprit: a stored procedure in SQL Server had gone rogue and slow. In this case we need to look closely at that proc; probably the data has grown so much in one of the tables that a bad join is now causing performance issues.

I shared that experience to point out that performance, and thus performance tuning, has to start at the very tail end (or head, depending on how you look at the layered architecture :). So now let's take the layers: there is the presentation layer, the business layer, the data access layer (persistence layer), and then there is the database itself.

Let's start with the database. What are the performance optimizations you can think of? Here are some starters.

  • A good logical design helps determine how the db is physically stored. Think through your normalization levels depending on your usage; make sure you don't over-normalize, as it will kill your joins and lead to badly performing queries.
  • Good key selection: have a good candidate for the primary key, and have proper foreign keys that are indexed (create a non-clustered index).
  • When writing queries, do not return all columns (i.e., no select * …)
  • Use SET NOCOUNT ON, and think about providing SQL hints.
  • There are two camps on temp tables: some hate them, some love them. In my opinion, temp tables can be very useful when you are dealing with a large number of tables to join; create temp tables to divide and conquer. Always think about the number of rows in the table and the potential growth of the data when you make these decisions. Check query plans and avoid table scans.
  • Use sp_executesql when you want to execute dynamic SQL.
  • Reduce round trips to the server and try to do bulk operations (see the sketch after this list).
  • These are just a few I can think of now; Google "database tuning" and I am sure you can find tons of good tuning tips from database gurus.
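To make the bulk-operations tip concrete, here is a rough C# sketch using ADO.NET's SqlBulkCopy to push many rows in a single round trip instead of inserting row by row. The staging table, columns, and connection string are placeholders for illustration, not part of the NorthWind schema.

    // Requires: using System.Data; using System.Data.SqlClient;
    var staging = new DataTable();
    staging.Columns.Add("CustomerId", typeof(string));
    staging.Columns.Add("OrderDate", typeof(DateTime));
    staging.Rows.Add("BERGS", DateTime.UtcNow);
    // ... add the rest of the rows here ...

    using (var connection = new SqlConnection("Data Source=.;Initial Catalog=NorthWind;Integrated Security=True"))
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        connection.Open();
        bulkCopy.DestinationTableName = "dbo.OrdersStaging";            // hypothetical staging table
        bulkCopy.BatchSize = 5000;                                      // rows per batch sent to the server
        bulkCopy.ColumnMappings.Add("CustomerId", "CustomerId");
        bulkCopy.ColumnMappings.Add("OrderDate", "OrderDate");
        bulkCopy.WriteToServer(staging);                                // one bulk round trip instead of N inserts
    }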

Now how about the Data Access Layer (DAL)? So what is the DAL? The DAL is nothing but a mechanism that provides us CRUD capability along with transactions, querying, and concurrency control. In simpler systems, ADO.NET provides a database-independent way to achieve this, so ADO.NET is our DAL. Or you might choose to abstract the core features provided by ADO.NET into another library where you have tighter control and the ability to switch databases at will, a DB services factory. If you are following Domain-Driven Design, then your OR/M plays the role of the DAL.
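As a rough illustration of such a factory (the class name is mine, and in a real system the provider name and connection string would come from config), ADO.NET's DbProviderFactories lets you create connections without hard-coding a specific database client:

    // Sketch of a provider-independent "DB services factory" on top of ADO.NET.
    // Requires: using System.Data.Common;
    internal static class DbServiceFactory
    {
        // Swap the provider (and connection string) via configuration to change databases.
        private const string ProviderName = "System.Data.SqlClient";

        public static DbConnection CreateOpenConnection(string connectionString)
        {
            DbProviderFactory factory = DbProviderFactories.GetFactory(ProviderName);
            DbConnection connection = factory.CreateConnection();
            connection.ConnectionString = connectionString;
            connection.Open();
            return connection;
        }
    }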

Let's see some ADO.NET-related performance tips:

  • For connection pooling to work, always use the same connection string. By default the pool size is 100; you can increase this (you will rarely need to).
  • Use the db server's IP rather than the server name in the connection string to avoid a domain name resolution lookup. (This is a performance tip, not a governance tip; for better governance, always use the DNS name.)
  • Always close connections explicitly (see the sketch after this list).
  • Connect using a service account and use Windows authentication.
  • Use stored procedures, and return only what you need.
  • Use some caching strategy to cache frequently used static data.
  • Again, this is the tip of the iceberg; for a more complete set of ADO.NET performance tuning tips, refer to "Improving .NET Application Performance and Scalability" from Microsoft patterns & practices.
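Here is a small sketch tying a few of those tips together: one shared connection string (so the pool is reused), an explicit pool size, a stored procedure call, and using blocks that close the connection (returning it to the pool) even when exceptions occur. The IP, pool size, and stored procedure name are illustrative only, not recommendations from the tips above.

    // Requires: using System.Data; using System.Data.SqlClient;
    private const string NorthWindConnectionString =
        "Data Source=10.0.0.5;Initial Catalog=NorthWind;Integrated Security=True;Max Pool Size=200";

    public DataTable GetCustomerOrders(string customerId)
    {
        using (var connection = new SqlConnection(NorthWindConnectionString))
        using (var command = new SqlCommand("dbo.GetCustomerOrders", connection))   // hypothetical proc
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", customerId);

            connection.Open();
            var result = new DataTable();
            using (var reader = command.ExecuteReader())
            {
                result.Load(reader);     // read only the columns the proc returns
            }
            return result;
        }   // connection is closed here and returned to the pool
    }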

We will continue to explore performance from an OR/M layer perspective, then the business layer, and finally the UI layer in a future post.

Until then, cheers and happy programming.

StackOverflow featured as Microsoft Case Study

I bumped into this Microsoft case study showcasing the Stack Overflow site and its ASP.NET MVC based architecture. [link]. The point of interest for me was the infrastructure specs for the site…

  • Presentation. Customers browse pages served by Windows Server 2008 Standard, through IIS 7.5. The Web servers are computers with Xeon quad core CPUs and 8 gigabytes (GB) RAM. When a user submits a question or an answer, their information is written to the database using the lightweight LINQ to SQL database abstraction layer.
  • Database. The database is hosted on a dedicated server with 2 Intel Xeon quad core CPUs and 48 GB of RAM, and runs Microsoft SQL Server 2008 Enterprise. Stack Overflow takes advantage of the Online Indexing feature, introduced with SQL Server 2008 Enterprise, which enables table re-indexing while the database continues to run.

Look at the number of horses that run that SQL Server backend… 48 GB of RAM!!!