SQL Server Performance Tuning Tips

The database is one of the most important and powerful parts of any application. If your database is not working properly and takes a long time to compute results, something is going wrong in the database. It needs to be tuned; otherwise, the performance of the whole application will degrade.
I know a lot of articles have already been published on this topic, but in this article I have tried to provide a list of tuning tips that covers the main aspects of a database. Database tuning is a critical and fussy process. It is true that tuning is usually a database administrator's task, but we should all have a basic level of knowledge of it, because if we are working on a project with no dedicated DBA, it is our responsibility to maintain the performance of the database. If database performance degrades, it affects the whole system. In this article, I explain some basic tuning tips that I learned from my own experience and from friends who work as database administrators. Using these tips, you can maintain or improve the performance of your database system. These tips are written for SQL Server, but most of them apply to other databases too, such as Oracle and MySQL. Please read them carefully and, at the end of the article, let me know if you find something wrong or incorrect.

Avoid Null value in fixed length field

We should avoid NULL values in fixed-length fields, because a NULL inserted into a fixed-length field takes the same amount of space as any other value for that field. If a field must allow NULL, use a variable-length field instead, which takes less space for NULL. The use of NULLs in a database can also reduce performance, especially in WHERE clauses. For example, prefer varchar over char and nchar.
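
For illustration, here is a hypothetical table showing the difference; the char(100) column reserves its full width even for NULL, while the varchar(100) column does not:

CREATE TABLE CustomerNote   -- hypothetical table for illustration
(
    Id        INT PRIMARY KEY,
    Note_Char CHAR(100) NULL,    -- a NULL here still reserves 100 bytes
    Note_Var  VARCHAR(100) NULL  -- a NULL here stores no character data
)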

Never use Select * Statement

When we require all the columns of a table, we often use a “Select *” statement. This is not a good approach, because when we use “select *”, SQL Server converts the * into all the column names before executing the query, which takes extra time and effort. So, always list the required column names in the query instead of using “select *”.

Normalize tables in database

Normalized, well-managed tables increase the performance of a database. So, always try to normalize to at least third normal form (3NF). Not every table requires 3NF, but a table in 3NF can generally be called a well-structured table.

Keep Clustered Index Small

A clustered index determines the physical order in which the table's data is stored. If the clustered index key is very large, it can reduce performance: on a table with a large number of rows, a large clustered key increases the total index size significantly, because every nonclustered index carries the clustered key as well. Also avoid indexing frequently changed data, because every change to the table must also be applied to the index, which can degrade performance.

Use Appropriate Data type

Selecting an appropriate data type reduces space usage and improves performance; an inappropriate one has the opposite effect. So, select data types according to the requirement. SQL Server has many data types that can store the same kind of data, but each has its own limitations and advantages over the others, so choose carefully.

Store image path instead of image itself

I have found that many developers store images in the database instead of the image path. It may be a genuine application requirement to store images in the database, but in general we should store only the path, because storing images in the database increases the database size and reduces performance.

USE Common Table Expressions (CTEs) instead of Temp table

We should prefer a CTE over a temp table because temp tables are stored physically in TempDB and are deleted only when the session ends, while a CTE is just a named query definition that exists for the duration of a single statement. A CTE is very lightweight and, in many cases, faster than a temp table.
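
A minimal sketch (the Employee table matches the examples used later in this article; the DeptId column is hypothetical):

-- The CTE exists only for the duration of this one statement;
-- nothing is written to TempDB the way a temp table would be.
WITH DeptEmployees AS
(
    SELECT EmpId, EmpName
    FROM Employee
    WHERE DeptId = 10
)
SELECT EmpId, EmpName FROM DeptEmployees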

Use Appropriate Naming Convention

The main goal of adopting a naming convention for database objects is to make the type and purpose of every object in the database easily identifiable by its users. A good naming convention decreases the time required to search for an object and clearly indicates what the object is or does.

tblEmployees      -- name of a table
vw_ProductDetails -- name of a view
PK_Employees      -- name of a primary key

Use UNION ALL instead of UNION

We should prefer UNION ALL to UNION because UNION removes duplicate rows, which usually requires a sort and increases execution time. Also, UNION can't work with the text data type, because text doesn't support sorting, so in that case UNION can't be used at all. Thus, prefer UNION ALL whenever duplicates are acceptable.
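
A small sketch with two hypothetical tables:

-- UNION removes duplicate rows (an implicit distinct, usually a sort);
-- UNION ALL simply appends the second result set to the first.
SELECT EmpId FROM CurrentEmployees
UNION ALL
SELECT EmpId FROM FormerEmployees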

Use Small data type for Index

It is very important to use a small data type for an index, because a bigger data type reduces index performance. For example, nvarchar(10) uses up to 20 bytes of data while varchar(10) uses up to 10 bytes, so an index on the varchar column works better. Similarly, datetime takes 8 bytes while int takes 4 bytes. A smaller data type means less I/O overhead, which increases the performance of the index.

Use Count(1) instead of Count(*) and Count(Column_Name)

There is little difference in the performance of these three expressions, but the last two are not considered good practice (note that Count(Column_Name) skips NULLs in that column, so it can even return a different result). So, use count(1) to get the number of records in a table.

Use Stored Procedure

Instead of sending raw queries from the application, we should use stored procedures, because stored procedures are fast, easier to maintain, and better for security and for large queries.

Use Between instead of In

If Between can be used instead of IN, then always prefer Between. For example, suppose you are searching for an employee whose id is 101, 102, 103 or 104. You can write the query using the In operator like this:

Select * From Employee Where EmpId In (101,102,103,104)

You can also use Between operator for the same query.

Select * from Employee Where EmpId Between 101 And 104

Use If Exists to determine the record

It has been seen many times that developers use “Select Count(*)” to check the existence of records. For example:

Declare @Count int;
Set @Count=(Select Count(*) From Employee Where EmpName Like '%Pan%')
If @Count>0
Begin
    --Statements
End

But this is not the proper way to write such queries, because the query above counts every matching row in the table. You can use If Exists for the same check instead; it can stop at the first match, which improves the performance of your query, as below.

IF Exists(Select EmpName From Employee Where EmpName Like '%Pan%')
Begin
    --Statements
End

Never Use "sp_" for User-Defined Stored Procedures

Many programmers use the "sp_" prefix for user-defined stored procedures. I suggest never doing this, because in SQL Server the master database holds the system stored procedures with the "sp_" prefix. When we create a stored procedure with the "sp_" prefix, SQL Server always looks in the master database first, then in the user database, which takes some extra time.

Practice to use Schema Name

A schema is the organization or structure of a database; we can define it as a collection of database objects that are owned by a single principal and form a single namespace. Including the schema name helps SQL Server find the object directly in that schema, which speeds up query resolution. For example, write [dbo] before the table name.

Avoid Cursors

A cursor is a temporary work area created in system memory when a SQL statement is executed; it holds a set of rows together with a pointer that identifies the current row, and it is used to retrieve data from a result set one row at a time. Using a cursor is not good practice, because it is slow precisely due to this row-by-row fetching. So, prefer a replacement: a set-based query, or a temporary table with a While loop, can replace a cursor in many cases.

SET NOCOUNT ON

When an INSERT, UPDATE, DELETE or SELECT command is executed, SQL Server returns a message with the number of rows affected by the query. Returning this message for every statement is unnecessary overhead; we can suppress it with SET NOCOUNT ON.
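
A minimal sketch (the procedure name is hypothetical):

CREATE PROCEDURE usp_GetEmployee
    @EmpId INT
AS
BEGIN
    SET NOCOUNT ON  -- suppress the "(n row(s) affected)" message
    SELECT EmpId, EmpName FROM Employee WHERE EmpId = @EmpId
END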

Use Try–Catch

In T-SQL, a Try-Catch block is very important for exception handling, and its proper use can save our data from undesired changes. We put the T-SQL statements in the TRY block, and the exception-handling code in the CATCH block.
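
A minimal sketch of the pattern:

BEGIN TRY
    UPDATE Employee SET EmpName = 'Pan' WHERE EmpId = 101
END TRY
BEGIN CATCH
    -- report the error instead of letting it pass silently
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
END CATCH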

Remove Unused Index

Remove all unused indexes, because every index is updated whenever the table is updated, so an index must be maintained even if it is never used.

Always create index on table

An index is a data structure for fast data retrieval. Indexes are special lookup structures that the database engine can use to speed up data retrieval; simply put, an index is a pointer to data in a table. So always try to keep at least one index on each table, either a clustered or a nonclustered one.

Use Foreign Key with appropriate action

A foreign key is a column or combination of columns that matches the primary key of a different table. Foreign keys are used to define a relationship and enforce integrity between two tables. In addition to protecting the integrity of our data, FK constraints also document the relationships between our tables within the database itself. Also define action rules for the DELETE and UPDATE commands; you can choose among No Action, Set NULL, Cascade, and Set Default.
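
A sketch, assuming a hypothetical Orders table that references the Customer table used later in this article:

CREATE TABLE Orders
(
    OrderId    INT PRIMARY KEY,
    CustomerId INT NOT NULL,
    CONSTRAINT FK_Orders_Customer FOREIGN KEY (CustomerId)
        REFERENCES Customer (CustomerId)
        ON DELETE CASCADE    -- deleting a customer removes its orders
        ON UPDATE NO ACTION  -- updates to the referenced key are rejected
)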

Use Alias Name

Aliasing temporarily renames a table or a column within a specific SQL statement. Using aliases, we can give a short name to a long one, which keeps queries shorter and more readable.

Use Transaction Management

A transaction is a unit of work performed against the database: a set of T-SQL statements that execute together, in a specific logical order, as a single unit. If all the statements execute successfully, the transaction is committed and the data is saved in the database permanently. If any single statement fails, the entire transaction fails and is rolled back.
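
A minimal sketch, assuming a hypothetical Account table:

BEGIN TRANSACTION
BEGIN TRY
    UPDATE Account SET Balance = Balance - 100 WHERE AccountId = 1
    UPDATE Account SET Balance = Balance + 100 WHERE AccountId = 2
    COMMIT TRANSACTION   -- both statements succeeded
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION -- any failure undoes the whole unit
END CATCH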

Use Index Name in Query

Although in most cases the query optimizer will pick the appropriate index for a specific table based on statistics, sometimes it is better to specify the index name in your SELECT query.

Example

SELECT
    e.Emp_IId,
    e.First_Name,
    e.Last_Name
FROM dbo.EMPLOYEE e
WITH (INDEX (Clus_Index))
WHERE e.Emp_IId > 5

Select Limited Data

We should retrieve only the required data and ignore the unimportant data. The less data retrieved, the faster the query will run. Rather than filtering on the client, push as much filtering as possible on the server-end. This will result in less data being sent on the wire and you will see results much faster.

Drop Index before Bulk Insertion of Data

We should drop the index before inserting a large amount of data; this makes the insert statement run faster. Once the inserts are completed, you can recreate the index.

Use Unique Constraint and Check Constraint

A Check constraint checks a specific condition before data is inserted into a table. If the data passes all the Check constraints, it is inserted into the table; otherwise, it is rejected. The CHECK constraint ensures that all values in a column satisfy the given condition.

A Unique constraint ensures that each row has a unique value in a column. It is like a primary key, but it can accept one NULL value, and a table can have more than one Unique constraint. Use Check and Unique constraints because they maintain the integrity of the database.
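
A small sketch combining both constraints on a hypothetical table:

CREATE TABLE Staff
(
    StaffId INT PRIMARY KEY,
    Email   VARCHAR(100) UNIQUE,       -- no two rows may share an email
    Salary  MONEY CHECK (Salary >= 0)  -- negative salaries are rejected
)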

Importance of Column Order in index

If we are creating a nonclustered index on more than one column, we should consider the sequence of the columns. The order or position of a column in an index plays a vital role in SQL query performance: an index can help a query only if the query's criteria match the leftmost columns of the index key. So place the most selective column at the leftmost side of a nonclustered index.
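
For example (assuming EmpName is more selective than DeptId in this hypothetical index):

-- Queries filtering on EmpName, alone or together with DeptId,
-- can seek this index; queries filtering only on DeptId cannot.
CREATE NONCLUSTERED INDEX IX_Employee_EmpName_DeptId
    ON Employee (EmpName, DeptId)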

Recompiled Stored Procedure

We all know that stored procedures execute T-SQL statements in less time than the same set of statements executed individually, because the execution plan of a stored procedure is cached and reused. Recompilation of a stored procedure reduces performance, but in some cases it is required, for example:

  1. Dropping or altering a column, index, or trigger of a table used by the procedure.
  2. Updating the statistics used by the execution plan of the procedure.
  3. Altering the procedure itself, which causes SQL Server to create a new execution plan.
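
When a recompile really is needed, it can be requested explicitly; a sketch (the procedure name is hypothetical):

-- Mark the procedure so its cached plan is rebuilt on next execution
EXEC sp_recompile 'usp_GetEmployee'

-- Or force a fresh plan for a single call
EXEC usp_GetEmployee @EmpId = 101 WITH RECOMPILE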

Use Sparse Column

A sparse column provides better storage for NULL data. If you have a column that contains a large number of NULLs, prefer a sparse column to a regular SQL Server column: a sparse column takes no space for NULL values, at the cost of slightly more space for non-NULL values.

Example

Create Table Table_Name
(
    Id int,              -- regular column
    Group_Id int Sparse  -- sparse column
)

Avoid Loops In Coding

Suppose you want to insert 10 records into a table. Instead of using a loop to insert the rows one by one, you can insert all of them with a single insert query.

declare @int int;
set @int=1;
while @int<=10
begin
    Insert Into Tab values(@int, 'Value' + cast(@int as varchar(10)));
    set @int=@int+1;
end

The method above is not a good approach for inserting multiple records; instead, you can use a single statement like the one below.

Insert Into Tab values(1,'Value1'),(2,'Value2'),(3,'Value3'),(4,'Value4'),(5,'Value5'),(6,'Value6'),(7,'Value7'),(8,'Value8'),(9,'Value9'),(10,'Value10');

Avoid Correlated Queries

In a correlated query, the inner query takes input from the outer (parent) query and runs once for each row, which reduces database performance.

Select Name, City,
    (Select Company_Name From Company Where companyId=cs.CustomerId)
From Customer cs

The better method is to prefer a join instead of the correlated query, as below.

Select cs.Name, cs.City, co.Company_Name
From Customer cs
Join Company co On cs.CustomerId=co.CustomerId

Avoid index and join hints

In some cases, an index or join hint may increase database performance, but if you provide a hint, the server always tries to use it even when a better execution plan exists, so this approach may also reduce performance. Use a join or index hint only if you are confident there is no better execution plan. If you have any doubt, leave the server free to choose the execution plan itself.

Avoid Use of Temp table

Avoid temp tables as much as you can, because a temp table is created in the temp database like any basic table, and after completing the task we have to drop it, which raises the load on the database. You can prefer a table variable instead.
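
A sketch of the table-variable alternative:

-- A table variable needs no explicit DROP; it goes out of scope
-- automatically when the batch ends.
DECLARE @Recent TABLE (EmpId INT, EmpName VARCHAR(50))
INSERT INTO @Recent
SELECT EmpId, EmpName FROM Employee WHERE EmpId > 100
SELECT * FROM @Recent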

Use Index for required columns

Indexes should be created on the columns used in WHERE, GROUP BY, ORDER BY, TOP, and DISTINCT clauses.

Don't use an Index in the following conditions

It is true that an index speeds up retrieval of results, but not always. In some cases an index doesn't improve query performance, and in those cases we can avoid it:

  1. When the size of the table is very small.
  2. When the query optimizer does not use the index.
  3. When DML operations (Insert, Update, Delete) are frequent.
  4. When the column contains TEXT or nText data.

Use View for complex queries

If you join two or more tables and the result of the query is frequently used, it is better to create a View containing the result of the complex query. You can then use this View many times without repeating the complex join each time.
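
A sketch, wrapping the customer/company join shown earlier:

CREATE VIEW vw_CustomerCompany
AS
SELECT cs.Name, cs.City, co.Company_Name
FROM Customer cs
JOIN Company co ON cs.CustomerId = co.CustomerId

-- Reuse it like a table:
-- SELECT * FROM vw_CustomerCompany WHERE City = 'Hanoi'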

Make Transaction short

It is better to keep transactions as short as we can, because a long transaction holds locks on tables longer and reduces database concurrency. So, always try to make transactions shorter.

Use Full-text Index

If your query performs wildcard searches with LIKE ('%...%'), a full-text index can increase performance. Full-text queries can include simple words and phrases, or multiple forms of a word or phrase. A full-text query returns any document that contains at least one match (also known as a hit); a match occurs when a target document contains all the terms specified in the full-text query and meets any other search conditions, such as the distance between the matching terms.
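
A sketch of the query side, assuming a full-text index has already been created on Employee(EmpName):

-- CONTAINS can use the full-text index; LIKE '%...%' cannot.
SELECT EmpId, EmpName
FROM Employee
WHERE CONTAINS(EmpName, 'Pan')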

Thanks for reading the article. As I asked at the start, if you have any doubts or I wrote something wrong, write back to me in the comments section.

 

LINK: http://www.c-sharpcorner.com/article/sql-server-performance-tuning-tips/


Hierarchy ID in SQL Server

– Basically, this is like storing a parent-child category hierarchy.

Introduction

SQL Server 2008 has a new data type, HierarchyID, to store hierarchical data. This tip gives a good step-by-step introduction to this data type.

Step by Step Tutorial

HierarchyID data type was introduced in SQL Server 2008. It is a CLR data type. First, we will create a table to store our hierarchical data.

CREATE TABLE H (
   Node HierarchyID PRIMARY KEY CLUSTERED,
   NodeLevel AS Node.GetLevel(),
   ID INT UNIQUE NOT NULL,
   Name VARCHAR(50) NOT NULL
 )

Node is the column which has the HierarchyID type, NodeLevel is a calculated column which has the level of a particular node. ID and Name are custom columns for additional information.

HierarchyID data type maps the data as a tree, so when traversing the tree structure, 2 types of methods could be possible:

  1. Depth First
  2. Breadth First

In order to make query execution efficient, we create indexes; with HierarchyID, likewise, we create indexes to make the above-mentioned traversals efficient. Since we have a clustered index defined on the Node column, we get a depth-first index by default. If you want a breadth-first index, execute the following query:

 CREATE UNIQUE INDEX bfs_index
   ON H (NodeLevel,Node)

In real scenarios, you do not need to create both indexes; your requirements decide which index should be created. Sometimes, you may need both.

Insert the Top Node.

INSERT INTO H (Node, ID, Name)
   VALUES (HierarchyId::GetRoot(), 1, 'Thuru')

Here we use HierarchyId::GetRoot() as the value for Node. A few things to note: we ask SQL Server for the HierarchyID of the root node using the SQL CLR function GetRoot. The :: syntax is used because GetRoot is a static method of the HierarchyID SQL CLR data type.

Execute the following SELECT statement. Here NodeText is the string representation of our Node value.

SELECT Node.ToString() AS NodeText, * FROM H

Let’s add a child node to our root node. We call the GetDescendant method on the node under which we add children. GetDescendant takes 2 arguments, the left and right siblings on the child level, respectively. As of now we have only the top node, with no children, so there are no left or right nodes on the child level and we pass NULL for both.

DECLARE @parent HierarchyId = HierarchyId::GetRoot()
INSERT INTO H (Node,ID,Name) VALUES (@parent.GetDescendant(NULL,NULL),2,'Johnny')

In the above code, we get the top node value using the built-in function, but we could also have queried it using other known values, such as asking for the Node of the row whose ID = 1 or Name = 'Thuru'.

After the above insertion, we have the following structure defined.

Let’s add another person below ‘Thuru’. Note that this time I get the parent node using a SELECT statement that asks for the Node of ‘Thuru’. I want to insert the new node to the left of Johnny, so for the new node the right sibling is Johnny and the left sibling is NULL. The query goes like this:

DECLARE @parent HierarchyId = (SELECT Node FROM H WHERE Name = 'Thuru')
DECLARE @jhony HierarchyId = (SELECT Node FROM H WHERE name = 'Johnny')
INSERT INTO H (Node,ID,Name) VALUES (@parent.GetDescendant(@jhony,NULL), 3, 'Robert')

This creates the following hierarchical structure:

Now it’s a simple guess for you to decide on what should be done to insert a new node between Johnny and Robert. We have the left node (Johnny) and the right node (Robert).

DECLARE @parent HierarchyId = (SELECT Node FROM H WHERE Name = 'Thuru')
DECLARE @johnny HierarchyId = (SELECT Node FROM H WHERE Name = 'Johnny')
DECLARE @robert HierarchyId = (SELECT Node FROM H WHERE Name = 'Robert')
INSERT INTO H (Node,ID,Name) VALUES (@parent.GetDescendant(@johnny,@robert), 4, 'Alex')

Let’s add some more nodes:

DECLARE @parent HierarchyId = (SELECT Node FROM H WHERE Name = 'Thuru')
DECLARE @alex HierarchyId = (SELECT Node FROM H WHERE Name = 'Alex')
DECLARE @robert HierarchyId = (SELECT Node FROM H WHERE Name = 'Robert')
INSERT INTO H (Node,ID,Name) VALUES (@parent.GetDescendant(@alex,@robert), 5, 'Niki')
GO

DECLARE @parent HierarchyId = (SELECT Node FROM H WHERE Name = 'Thuru')
DECLARE @alex HierarchyId = (SELECT Node FROM H WHERE Name = 'Alex')
DECLARE @niki HierarchyId = (SELECT Node FROM H WHERE Name = 'Niki')
INSERT INTO H (Node,ID,Name) VALUES (@parent.GetDescendant(@alex,@niki), 6, 'Steve')
GO

Now we have the following structure defined:

Let’s add 2 child nodes under ‘Steve’:

DECLARE @steveParent HierarchyId = (SELECT Node FROM H WHERE Name = 'Steve')
INSERT INTO H (Node,ID,Name) VALUES (@steveParent.GetDescendant(NULL,NULL), 7, 'S1')
GO

DECLARE @steveParent HierarchyId = (SELECT Node FROM H WHERE Name = 'Steve')
DECLARE @s1 HierarchyId = (SELECT Node FROM H WHERE Name = 'S1')
INSERT INTO H (Node,ID,Name) VALUES (@steveParent.GetDescendant(@s1,NULL), 8, 'S2')
GO

Now we have the following structure:

IsDescendantOf

This function returns ‘TRUE’ if the passed node is a descendant of a particular node.

DECLARE @parent HierarchyId = (SELECT Node FROM H WHERE Name = 'Thuru')
DECLARE @parentNodeLevel int = (select NodeLevel from h where name = 'Thuru')

SELECT Node.ToString() AS NodeText, * FROM H
WHERE Node.IsDescendantOf(@parent) = 'TRUE'
  AND Node != @parent
  AND NodeLevel = @parentNodeLevel + 1
GO

In the above query, I restricted the result to the immediate children using the NodeLevel column (@parentNodeLevel + 1). Also notice that I excluded the parent node itself, because the IsDescendantOf function includes the parent node in its result.

We will get the immediate children of ‘Thuru’ as the result.

GetAncestor

As its name implies, this function returns the ancestor of a specified node at the specified level.

DECLARE @child HierarchyId = (SELECT Node FROM H WHERE Name = 'S1')
SELECT * FROM H WHERE Node = @child.GetAncestor(2)

The following query gives the immediate parent node.

DECLARE @child HierarchyId = (SELECT Node FROM H WHERE Name = 'S1')
SELECT * FROM H WHERE Node = @child.GetAncestor(1)

When 0 is passed as the argument, it returns the node itself:

DECLARE @child HierarchyId = (SELECT Node FROM H WHERE Name = 'S1')
SELECT * FROM H WHERE Node = @child.GetAncestor(0)

Deletion

Deleting a node does not automatically delete the child nodes, this would result in orphaned children.

DELETE FROM H WHERE Name = 'Steve'

Note that S1 and S2 have the NodeText values /1.1.1/1 and /1.1.1/2, but there is no longer a node /1.1.1, leaving S1 and S2 orphaned.

Moving Nodes

The GetReparentedValue function is used to move the nodes to different locations:

DECLARE @newParent HierarchyId = (SELECT Node FROM H WHERE name = 'Johnny')
UPDATE H SET Node = Node.GetReparentedValue(Node.GetAncestor(1),@newParent)
       WHERE Name = 'S1'

The above query moves S1 under Johnny.

Points of Interest

Microsoft recommends using proper stored procedures matched to your business scenario when working with HierarchyID, thus eliminating unwanted results such as orphaned children.

Conclusion

This article explains a few fundamental functions of the HierarchyID data type, which is sufficient most of the time; you can explore the other functions in this TechNet article.

Original link: http://www.codeproject.com/Tips/740553/Hierarchy-ID-in-SQL-Server

 

Truncate Log File in SQL Server 2005

Introduction

People who have started using SQL Server 2005 and later versions may sometimes encounter the problem of an ever-increasing log file.

The log file is the storage of all the transactions done in the Database, and hence it contains details about each and every action done.

Many developers who are not so familiar with database administration may be confused about how to manage this huge size, and may wonder what impact it can have on their production environment.

Here I have written a small tool that can be used to shrink the log file to its minimum size; I hope it helps.

The Purpose of the Log File

When SQL Server is functioning and operating, the database engine keeps track of almost every change that takes place within the database by making entries into the transaction log so that it can be used later if needed.

The location of the SQL Server transaction log is configured at the same time the database is created. When creating a database, the location of the SQL Server transaction log is specified as well as other options associated with the transaction log.

What is a Recovery Model

The recovery model determines how transactions are registered in the log file. Since log files contain all the transactions in the database, they can be used for recovery. There are 3 recovery models; here is what they do.

 

Simple Recovery Model

The simple recovery model is just that: simple. In this approach, SQL Server maintains only a minimal amount of information in the transaction log. SQL Server truncates the transaction log each time the database reaches a transaction checkpoint, leaving no log entries for disaster recovery purposes.

In databases using the simple recovery model, you may restore full or differential backups only.

Full Recovery Model

In the event of a database failure, you have the most flexibility restoring databases using the full recovery model. In addition to preserving data modifications stored in the transaction log, the full recovery model allows you to restore a database to a specific point in time.

The bulk-logged recovery model is a special-purpose model that works similarly to the full recovery model, except for how it handles bulk data modification operations: the bulk-logged model records these operations in the transaction log using a technique known as minimal logging. This saves significantly on processing time but prevents you from using the point-in-time restore option.
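
The current model can be inspected and changed with T-SQL; a small sketch (Test_DB is a placeholder database name):

-- Inspect the recovery model of every database
SELECT name, recovery_model_desc FROM sys.databases

-- Switch to simple and back to full
ALTER DATABASE Test_DB SET RECOVERY SIMPLE
ALTER DATABASE Test_DB SET RECOVERY FULL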

The Code

The core of the tool lies in the function below.

As you can see, this function takes 3 parameters: the name of the database in question, the server name, and the new desired size of the log file, which I pass as 0 to shrink the file to its minimum (it ends up at 128 KB).

  • Connects to the database
  • Gets the size of the log file
  • Changes the recovery model to Simple
  • Issues a checkpoint against the database to write the records from the transaction log to the database
  • Empties the log file, though without changing the physical size of the file (it deletes the data inside, increasing the blank space in the file), because, as you may know, the sizes of log and data files do not reflect exactly the size of the data within; they usually contain empty space governed by a growth policy
  • Reduces the size of the file
  • Gets the new size and displays a small report about the results
private static void ShrinkDatabase(string Database, string ServerName, int NewSize)
{
    int OldSize;
    SqlConnection Cnn = null;
    try
    {
        Cnn = new SqlConnection(string.Format(
            "trusted_connection=true; database={0};server={1}", Database, ServerName));
        Cnn.Open();

        SqlCommand Cmm = new SqlCommand("", Cnn);
        Cmm.CommandType = CommandType.Text;
        Cmm.CommandText = "SELECT [size] FROM sysfiles WHERE name LIKE '%LOG%'";
        OldSize = (int) Cmm.ExecuteScalar();
        Cmm.CommandText = string.Format("ALTER DATABASE {0} SET RECOVERY SIMPLE", Database);
        Cmm.ExecuteNonQuery();
        Cmm.CommandText = "CHECKPOINT";
        // issue a checkpoint against the database to write the records
        // from the transaction log to the database.
        Cmm.ExecuteNonQuery();
        Cmm.CommandText = string.Format("BACKUP LOG [{0}] WITH NO_LOG", Database);
        Cmm.ExecuteNonQuery(); // empties the log file but does not reduce its size.
        Cmm.CommandText = "SELECT Name FROM sysfiles WHERE name LIKE '%LOG%'";
        Database = (string) Cmm.ExecuteScalar();
        Cmm.CommandText = string.Format("DBCC SHRINKFILE ({0}, {1})", Database, NewSize);
        Cmm.ExecuteNonQuery(); // reduce the size of the individual LDF file(s).
        Cmm.CommandText = "SELECT [size] FROM sysfiles WHERE name LIKE '%LOG%'";
        NewSize = (int) Cmm.ExecuteScalar();
        Console.WriteLine(
            " The Old Size was {0}KB \n The New Size is {1}KB \n" +
            " The Logfile has shrunk by {2}KB \n and you gained {3}% of the file size",
            OldSize, NewSize, OldSize - NewSize,
            (100 - (NewSize * 100.00 / OldSize)).ToString("0.00"));
    }
    catch (Exception Ex)
    {
        Console.WriteLine(Ex.ToString());
    }
    finally
    {
        if (Cnn != null)
        {
            Cnn.Close();
            Cnn.Dispose();
        }
    }
}

One helpful function is the one below, which displays the databases and their sizes. That can be done simply by calling the system stored procedure "sp_databases". However, our concern is the log file, not the data file, so we also query the size of the log for each database.

private static SortedList<int, string> DisplayDatabases(string Servername)
{
    SqlConnection Cnn = null; SqlCommand Cmm = null;
    SortedList<int, string> Result = new SortedList<int, string>();
    try
    {
        Cnn = new SqlConnection(string.Format(
            "trusted_connection=true; server={0}", Servername));
        SqlConnection CnnLogSize = new SqlConnection(string.Format(
            "trusted_connection=true; server={0}", Servername));
        Cmm = new SqlCommand("sp_databases", Cnn);
        Cmm.CommandType = CommandType.StoredProcedure;
        Cnn.Open();
        SqlDataReader sqlDataReader = Cmm.ExecuteReader(CommandBehavior.CloseConnection);
        int Index = 0;
        Console.Write("#".PadRight(4));
        Console.Write("Name".PadRight(40));
        Console.Write("\t");
        Console.Write("Size".PadRight(20));
        Console.WriteLine("Log Size");
        Console.WriteLine("--------------------------------------------------------------------");
        string sqlQueryLogSize = null;
        SqlCommand cmdLogSize = new SqlCommand();
        if (sqlDataReader != null)
            while (sqlDataReader.Read())
            {
                sqlQueryLogSize = string.Format(
                    "use {0} SELECT [size] FROM sysfiles where name like '%{0}%' and name like '%log%'",
                    sqlDataReader.GetString(0));
                cmdLogSize.CommandText = sqlQueryLogSize;
                cmdLogSize.Connection = CnnLogSize;
                CnnLogSize.Open();
                var LogFileSize = cmdLogSize.ExecuteScalar();
                CnnLogSize.Close();
                Console.Write(Index.ToString().PadRight(4));
                Console.Write(sqlDataReader.GetString(0).PadRight(40));
                Console.Write("\t");
                Console.Write(sqlDataReader.GetInt32(1).ToString().PadRight(20));
                if (LogFileSize != null) Console.Write(LogFileSize.ToString());
                Console.WriteLine();
                Result.Add(Index++, sqlDataReader.GetString(0));
            }

        if (sqlDataReader != null)
        {
            sqlDataReader.Close();
            sqlDataReader.Dispose();
        }
        cmdLogSize.Dispose();
        CnnLogSize.Dispose();
    }
    catch (Exception Ex)
    {
        Console.WriteLine(Ex.ToString());
    }
    finally
    {
        if (Cnn != null)
        {
            Cnn.Close();
            Cnn.Dispose();
        }
    }
    Console.WriteLine("--------------------------------------------------------------------");
    Console.WriteLine("Please Choose a number from 0 to {0}, Hit Ctrl+C to exit", Result.Count - 1);
    return Result;
}

The main function simply calls the above functions and displays some messages so the user can decide which database to truncate the log file of.

 static void Main(string[] args)
 {
     string ServerName;
     if (args.Length == 0 )ServerName = "Localhost";
     else ServerName = args[0] ;
     A:
     SortedList<int, string> L = DisplayDatabases(ServerName);
     string Input = Console.ReadLine();
     int IntInput=0;
     bool isInt= int.TryParse(Input, out IntInput);

     while (!(IntInput > -1 && IntInput < L.Count && isInt))
     {
         L = DisplayDatabases(ServerName );
         Input = Console.ReadLine();
         isInt = int.TryParse(Input, out IntInput);
     }
     ShrinkDatabase(L[IntInput], ServerName, 0);
     Console.WriteLine("------------------------------");
     Console.WriteLine("Do you want to Shrink another? hit Y if Yes, Otherwise N"); 
     Input = Console.ReadLine();
     if (Input != null) if (Input.ToLower() =="y" ) goto A;
 }

Original link: http://www.codeproject.com/Articles/62632/Truncate-Log-File-in-SQL-Server-2005

SQL Server error 23: Data Error (cyclic redundancy check)

Today, while backing up a database using SQL Server 2005, I kept getting this error:
23 Data Error (cyclic redundancy check)

DBCC results for 'vinacorp'.
Service Broker Msg 9675, State 1: Message Types analyzed: 14.
Service Broker Msg 9676, State 1: Service Contracts analyzed: 6.
Service Broker Msg 9667, State 1: Services analyzed: 3.
Service Broker Msg 9668, State 1: Service Queues analyzed: 3.
Service Broker Msg 9669, State 1: Conversation Endpoints analyzed: 0.
Service Broker Msg 9674, State 1: Conversation Groups analyzed: 0.
Service Broker Msg 9670, State 1: Remote Service Bindings analyzed: 0.
Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:431402) with latch type SH. 23(Data error (cyclic redundancy check).) failed. 
Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:473633) with latch type SH. 23(Data error (cyclic redundancy check).) failed. 
CHECKDB found 0 allocation errors and 2 consistency errors not associated with any single object.
DBCC results for 'sys.sysrowsetcolumns'.
There are 1369 rows in 13 pages for object "sys.sysrowsetcolumns".
DBCC results for 'sys.sysrowsets'.
There are 195 rows in 2 pages for object "sys.sysrowsets".
DBCC results for 'sysallocunits'.
There are 242 rows in 3 pages for object "sysallocunits".
DBCC results for 'sys.sysfiles1'.
There are 2 rows in 1 pages for object "sys.sysfiles1".
DBCC results for 'sys.syshobtcolumns'.
There are 1369 rows in 14 pages for object "sys.syshobtcolumns".
DBCC results for 'sys.syshobts'.
There are 195 rows in 2 pages for object "sys.syshobts".
DBCC results for 'sys.sysftinds'.
There are 0 rows in 0 pages for object "sys.sysftinds".
DBCC results for 'sys.sysserefs'.
There are 242 rows in 1 pages for object "sys.sysserefs".
DBCC results for 'sys.sysowners'.
There are 14 rows in 1 pages for object "sys.sysowners".
DBCC results for 'sys.sysprivs'.
There are 134 rows in 1 pages for object "sys.sysprivs".
DBCC results for 'sys.sysschobjs'.
There are 651 rows in 14 pages for object "sys.sysschobjs".
DBCC results for 'sys.syscolpars'.
There are 2290 rows in 44 pages for object "sys.syscolpars".
DBCC results for 'sys.sysnsobjs'.
There are 1 rows in 1 pages for object "sys.sysnsobjs".
DBCC results for 'sys.syscerts'.
There are 0 rows in 0 pages for object "sys.syscerts".
DBCC results for 'sys.sysxprops'.
There are 9 rows in 1 pages for object "sys.sysxprops".
DBCC results for 'sys.sysscalartypes'.
There are 27 rows in 1 pages for object "sys.sysscalartypes".
DBCC results for 'sys.systypedsubobjs'.
There are 5 rows in 1 pages for object "sys.systypedsubobjs".
DBCC results for 'sys.sysidxstats'.
There are 440 rows in 13 pages for object "sys.sysidxstats".
DBCC results for 'sys.sysiscols'.
There are 840 rows in 10 pages for object "sys.sysiscols".
DBCC results for 'sys.sysbinobjs'.
There are 23 rows in 1 pages for object "sys.sysbinobjs".
DBCC results for 'sys.sysobjvalues'.
There are 978 rows in 347 pages for object "sys.sysobjvalues".
DBCC results for 'sys.sysclsobjs'.
There are 24 rows in 1 pages for object "sys.sysclsobjs".
DBCC results for 'sys.sysrowsetrefs'.
There are 18 rows in 1 pages for object "sys.sysrowsetrefs".
DBCC results for 'sys.sysremsvcbinds'.
There are 0 rows in 0 pages for object "sys.sysremsvcbinds".
DBCC results for 'sys.sysxmitqueue'.
There are 0 rows in 0 pages for object "sys.sysxmitqueue".
DBCC results for 'sys.sysrts'.
There are 1 rows in 1 pages for object "sys.sysrts".
DBCC results for 'sys.sysconvgroup'.
There are 0 rows in 0 pages for object "sys.sysconvgroup".
DBCC results for 'sys.sysdesend'.
There are 0 rows in 0 pages for object "sys.sysdesend".
DBCC results for 'sys.sysdercv'.
There are 0 rows in 0 pages for object "sys.sysdercv".
DBCC results for 'sys.syssingleobjrefs'.
There are 205 rows in 1 pages for object "sys.syssingleobjrefs".
DBCC results for 'sys.sysmultiobjrefs'.
There are 2670 rows in 24 pages for object "sys.sysmultiobjrefs".
DBCC results for 'sys.sysdbfiles'.
There are 2 rows in 1 pages for object "sys.sysdbfiles".
DBCC results for 'sys.sysguidrefs'.
There are 0 rows in 0 pages for object "sys.sysguidrefs".
DBCC results for 'sys.sysqnames'.
There are 91 rows in 1 pages for object "sys.sysqnames".
DBCC results for 'sys.sysxmlcomponent'.
There are 93 rows in 1 pages for object "sys.sysxmlcomponent".
DBCC results for 'sys.sysxmlfacet'.
There are 97 rows in 1 pages for object "sys.sysxmlfacet".
DBCC results for 'sys.sysxmlplacement'.
There are 17 rows in 1 pages for object "sys.sysxmlplacement".
DBCC results for 'sys.sysobjkeycrypts'.
There are 0 rows in 0 pages for object "sys.sysobjkeycrypts".
DBCC results for 'sys.sysasymkeys'.
There are 0 rows in 0 pages for object "sys.sysasymkeys".
DBCC results for 'sys.syssqlguides'.
There are 0 rows in 0 pages for object "sys.syssqlguides".
DBCC results for 'sys.sysbinsubobjs'.
There are 0 rows in 0 pages for object "sys.sysbinsubobjs".
DBCC results for 'stock_relation'.
There are 52 rows in 1 pages for object "stock_relation".
DBCC results for 'stock_owners'.
There are 120 rows in 4 pages for object "stock_owners".
DBCC results for 'news_category_summaries'.
There are 24 rows in 1 pages for object "news_category_summaries".
DBCC results for 'market_information'.
There are 12332 rows in 103 pages for object "market_information".
DBCC results for 'crawler_manager'.
There are 14 rows in 1 pages for object "crawler_manager".
DBCC results for 'user_log_status'.
There are 786 rows in 5 pages for object "user_log_status".
DBCC results for 'crawler_hash'.
There are 0 rows in 0 pages for object "crawler_hash".
DBCC results for 'IPOItem'.
There are 178 rows in 24 pages for object "IPOItem".
DBCC results for 'sysdiagrams'.
There are 0 rows in 0 pages for object "sysdiagrams".
DBCC results for 'download_queue'.
There are 7953 rows in 417 pages for object "download_queue".
DBCC results for 'users_stocks_relation'.
There are 2977 rows in 31 pages for object "users_stocks_relation".
DBCC results for 'tag_map'.
There are 92674 rows in 289 pages for object "tag_map".
DBCC results for 'stock_financial_index'.
There are 1270156 rows in 9703 pages for object "stock_financial_index".
DBCC results for 'stock_events'.
There are 8985 rows in 258 pages for object "stock_events".
DBCC results for 'crawler_status'.
There are 6114 rows in 52 pages for object "crawler_status".
DBCC results for 'IPOResult'.
There are 166 rows in 3 pages for object "IPOResult".
DBCC results for 'stock_owner_relation'.
There are 150 rows in 2 pages for object "stock_owner_relation".
DBCC results for 'stock_item'.
There are 3774 rows in 110 pages for object "stock_item".
DBCC results for 'stock_event_categories'.
There are 10 rows in 1 pages for object "stock_event_categories".
DBCC results for 'stock_dividend_payments'.
There are 2085 rows in 26 pages for object "stock_dividend_payments".
DBCC results for 'stock_financial_report'.
There are 37129 rows in 1435 pages for object "stock_financial_report".
DBCC results for 'stock_item_property'.
There are 44854 rows in 572 pages for object "stock_item_property".
DBCC results for 'news_item_summaries'.
There are 233482 rows in 24895 pages for object "news_item_summaries".
DBCC results for 'stock_capital_structure_changes'.
There are 2932 rows in 48 pages for object "stock_capital_structure_changes".
DBCC results for 'news_item_summaries_detail'.
There are 0 rows in 0 pages for object "news_item_summaries_detail".
DBCC results for 'news_item_comments'.
There are 1148 rows in 95 pages for object "news_item_comments".
DBCC results for 'stock_base_index_summaries'.
There are 3319 rows in 89 pages for object "stock_base_index_summaries".
DBCC results for 'news_subjects'.
There are 31 rows in 1 pages for object "news_subjects".
DBCC results for 'stock_profile_visit'.
Msg 8978, Level 16, State 1, Line 1
Table error: Object ID 1549248574, index ID 5, partition ID 72057594054377472, alloc unit ID 72057594061914112 (type In-row data). Page (1:371433) is missing a reference from previous page (1:473633). Possible chain linkage problem.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:431402) allocated to object ID 1549248574, index ID 5, partition ID 72057594054377472, alloc unit ID 72057594061914112 (type In-row data) was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header.
Msg 8976, Level 16, State 1, Line 1
Table error: Object ID 1549248574, index ID 5, partition ID 72057594054377472, alloc unit ID 72057594061914112 (type In-row data). Page (1:431402) was not seen in the scan although its parent (1:400032) and previous (1:472538) refer to it. Check any previous errors.
Msg 8978, Level 16, State 1, Line 1
Table error: Object ID 1549248574, index ID 5, partition ID 72057594054377472, alloc unit ID 72057594061914112 (type In-row data). Page (1:461595) is missing a reference from previous page (1:431402). Possible chain linkage problem.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:473633) allocated to object ID 1549248574, index ID 5, partition ID 72057594054377472, alloc unit ID 72057594061914112 (type In-row data) was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header.
Msg 8976, Level 16, State 1, Line 1
Table error: Object ID 1549248574, index ID 5, partition ID 72057594054377472, alloc unit ID 72057594061914112 (type In-row data). Page (1:473633) was not seen in the scan although its parent (1:400032) and previous (1:433473) refer to it. Check any previous errors.
There are 1207718 rows in 11698 pages for object "stock_profile_visit".
CHECKDB found 0 allocation errors and 6 consistency errors in table 'stock_profile_visit' (object ID 1549248574).
DBCC results for 'stock_major_shareholders_transactions'.
There are 10443 rows in 359 pages for object "stock_major_shareholders_transactions".
DBCC results for 'macro_index'.
There are 182 rows in 2 pages for object "macro_index".
DBCC results for 'user_log'.
There are 65210 rows in 1136 pages for object "user_log".
DBCC results for 'stock_major_shareholders'.
There are 3059 rows in 36 pages for object "stock_major_shareholders".
DBCC results for 'stock_category'.
There are 89 rows in 3 pages for object "stock_category".
DBCC results for 'tag'.
There are 3939 rows in 30 pages for object "tag".
DBCC results for 'stock_summary'.
There are 3772 rows in 1270 pages for object "stock_summary".
DBCC results for 'sys.queue_messages_1977058079'.
There are 0 rows in 0 pages for object "sys.queue_messages_1977058079".
DBCC results for 'sys.queue_messages_2009058193'.
There are 0 rows in 0 pages for object "sys.queue_messages_2009058193".
DBCC results for 'sys.queue_messages_2041058307'.
There are 0 rows in 0 pages for object "sys.queue_messages_2041058307".
CHECKDB found 0 allocation errors and 8 consistency errors in database 'vinacorp'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (vinacorp).
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I ran DBCC CHECKDB ('dbname', REPAIR_ALLOW_DATA_LOSS), but it still did not succeed.
– After some searching, I found the following approach and am sharing it with everyone:

A note for everyone: whatever action you take, be careful. It is best to make a backup before you start, so that if anything goes wrong you still have something to restore. In my case a backup was not possible, so I took the database offline and copied the files somewhere safe. Better safe than sorry.
Step 1: Switch the database to single-user mode instead of multi-user mode

ALTER DATABASE Test_DB SET SINGLE_USER WITH NO_WAIT

Note: if any process is still accessing your database, you have to kill that process manually:

exec sp_who2
kill PID

Step 2: Run a repair with rebuild

dbcc checkdb ('Test_DB',repair_rebuild)

Step 3: Fix the remaining errors (per the CHECKDB output above, repair_allow_data_loss is the minimum repair level)

dbcc checkdb ('Test_DB',repair_allow_data_loss)

Step 4: Switch back to multi-user mode

ALTER DATABASE Test_DB SET MULTI_USER WITH NO_WAIT

– After that, backing up the database worked fine; the error was gone.

Some reference links:

http://dba.stackexchange.com/questions/11204/a-nonrecoverable-i-o-error-occurred-on-file-after-db-shrink

http://www.sqlservercentral.com/Forums/Topic519683-266-1.aspx

http://dba.stackexchange.com/questions/18535/sqlserver-rebuild-reorganize-fragmented-indexes-causes-crc-error-or-consistency

http://stackoverflow.com/questions/4673065/alter-database-failed-because-a-lock-could-not-be-placed-on-database

 

BugNet Database Diagram

1. BugNet Permission

2. BugNet ASPNET Membership

 

A More Efficient Method for Paging Through Large Result Sets

I. SQL Server 2008 paging methods
http://stackoverflow.com/questions/4358253/sql-server-2008-paging-methods

II. SQL Server 2005 Paging – The Holy Grail
http://www.sqlservercentral.com/articles/T-SQL/66030/

III. Paging options

1. ROW_NUMBER() (new in SQL Server 2005)
http://stackoverflow.com/questions/1078956/which-paging-method-sql-server-2008-for-best-performance
http://www.4guysfromrolla.com/articles/031506-1.aspx
2. SET ROWCOUNT (the fastest approach for large data sets)
http://www.4guysfromrolla.com/webtech/042606-1.shtml
3. CURSORS
4. Temporary table
5. Sub query
6. Other

DECLARE @maxRow int;
DECLARE @startRowIndex int;  -- assumed to be supplied by the caller
SET @maxRow = 100;
SET @startRowIndex = 1;
SET ROWCOUNT @maxRow
-- @TempItems is assumed to be a table variable already filled with IDs
-- in the desired paging order, e.g.:
-- DECLARE @TempItems TABLE (ID int IDENTITY, EmployeeID int)
SELECT e.*, d.[Name] as DepartmentName
FROM @TempItems t
INNER JOIN Employees e ON
    e.EmployeeID = t.EmployeeID
INNER JOIN Departments d ON
    d.DepartmentID = e.DepartmentID
WHERE ID >= @startRowIndex

SET ROWCOUNT 0 -- very important: reset so later queries are not limited

GO
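
For comparison, a sketch of the ROW_NUMBER() approach (option 1 above), using the same Employees table; here we fetch rows 11 through 20 ordered by EmployeeID:

;WITH Numbered AS
(
    SELECT e.*, ROW_NUMBER() OVER (ORDER BY e.EmployeeID) AS RowNum
    FROM Employees e
)
SELECT * FROM Numbered
WHERE RowNum BETWEEN 11 AND 20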

Generating INSERT statements in SQL Server

DECLARE @tableName VARCHAR(100)
SET @tableName ='T_DUNGDO'

DECLARE @MaCungXN VARCHAR(5) SET @MaCungXN= '717'
DECLARE @FK_BienSoXe VARCHAR(16) SET @FK_BienSoXe= '30U4348'
DECLARE @TuNgay VARCHAR(20) SET @TuNgay = '2011-10-17'
DECLARE @DenNGay VARCHAR(20) SET @DenNGay = '2011-11-17'

--Declare a cursor to retrieve column specific information for the specified table
DECLARE cursCol CURSOR FAST_FORWARD FOR
SELECT column_name,data_type FROM information_schema.columns WHERE table_name = @tableName
OPEN cursCol
DECLARE @string NVARCHAR(3000) --for storing the first half of INSERT statement
DECLARE @stringData NVARCHAR(3000) --for storing the data (VALUES) related statement
DECLARE @dataType NVARCHAR(1000) --data types returned for respective columns
SET @string='INSERT '+@tableName+'('
SET @stringData=''

DECLARE @colName NVARCHAR(50)

FETCH NEXT FROM cursCol INTO @colName,@dataType

IF @@fetch_status<>0
BEGIN
PRINT 'Table '+@tableName+' not found, processing skipped.'
CLOSE curscol
DEALLOCATE curscol
RETURN
END

WHILE @@FETCH_STATUS=0
BEGIN
IF @dataType IN ('varchar','char')
BEGIN
--SET @stringData=@stringData+'''''''''+isnull('+@colName+','''')+'''''',''+'
SET @stringData=@stringData+''''+'''+isnull('''''+'''''+'+@colName+'+'''''+''''',''NULL'')+'',''+'
END
ELSE IF @dataType IN ('nchar','nvarchar')
BEGIN
SET @stringData=@stringData+'''N'+'''+isnull('''''+'''''+'+@colName+'+'''''+''''',''NULL'')+'',''+'
END
ELSE
IF @dataType IN ('text','ntext') --if the datatype is text or something else
BEGIN
SET @stringData=@stringData+'''''''''+isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
END
ELSE
IF @dataType = 'money' --because money doesn't get converted from varchar implicitly
BEGIN
SET @stringData=@stringData+'''convert(money,''''''+isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
END
ELSE
IF @dataType='datetime'
BEGIN
--SET @stringData=@stringData+'''convert(datetime,''''''+isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
--SELECT 'INSERT Authorizations(StatusDate) VALUES('+'convert(datetime,'+isnull(''''+convert(varchar(200),StatusDate,121)+'''','NULL')+',121),)' FROM Authorizations
--SET @stringData=@stringData+'''convert(money,''''''+isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
SET @stringData=@stringData+'''convert(datetime,'+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+',121)+'''''+''''',''NULL'')+'',121),''+'
-- 'convert(datetime,'+isnull(''''+convert(varchar(200),StatusDate,121)+'''','NULL')+',121),)' FROM Authorizations
END
ELSE
IF @dataType='image'
BEGIN
SET @stringData=@stringData+'''''''''+isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
END
ELSE --presuming the data type is int,bit,numeric,decimal
BEGIN
--SET @stringData=@stringData+'''''''''+isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
--SET @stringData=@stringData+'''convert(datetime,'+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+',121)+'''''+''''',''NULL'')+'',121),''+'
SET @stringData=@stringData+''''+'''+isnull('''''+'''''+convert(varchar(200),'+@colName+')+'''''+''''',''NULL'')+'',''+'
END

SET @string=@string+@colName+','

FETCH NEXT FROM cursCol INTO @colName,@dataType
END
DECLARE @Query NVARCHAR(4000)

SET @query ='SELECT '''+SUBSTRING(@string,0,LEN(@string)) + ') VALUES(''+ ' + SUBSTRING(@stringData,0,LEN(@stringData)-2)+'''+'')'' FROM '+@tableName

--Add a WHERE condition if needed
--SET @query = @query + ''WHERE MaCungXN=''717'' AND FK_BienSoXe=''30U4348'' AND ThoiDiemBatDauDung >=''2011-10-17 00:10:02.000'' AND ThoiDiemBatDauDung = ''' + @TuNgay + '''AND ThoiDiemBatDauDung <=''' + @DenNGay + ''''
--PRINT @query

EXEC sp_executesql @query
--select @query

CLOSE cursCol
DEALLOCATE cursCol

Details here:
http://www.codeproject.com/KB/database/InsertGeneratorPack.aspx