25 posts tagged 'SQL Server 2008'

  1. 2008 R2 Service Pack 2 CTP
  2. Diagnostic queries for SQL Server 2008 and 2008 R2
  3. Tracking DML Changes
  4. Detecting and Checking Deadlocks
  5. BACKUP LOG WITH TRUNCATE_ONLY - a feature no longer available in SQL2008
  6. Myths and misconceptions: 60-page PDF of blog posts
  7. Changing server collation and database collation at once
  8. Optimising Server-Side Paging - Part I
  9. Fixing inaccurate row counts in MSSQL
  10. T-SQL Debugger is back in SQL Server 2008 ... debugging now made easy

2008 R2 Service Pack 2 CTP

Hello, this is 주디아줌마.

The SQL Server 2008 R2 Service Pack 2 CTP has been released. I have been waiting a long time for this one. The final release should follow soon.


Download

http://www.microsoft.com/en-us/download/details.aspx?id=29848

http://blogs.msdn.com/b/sqlreleaseservices/archive/2012/05/14/sql-server-2008-r2-service-pack-2-customer-technology-preview-available.aspx

Below are a few key customer requested updates in Microsoft SQL Server 2008 R2 SP2 CTP:

  • Reporting Services Charts May Be Zoomed & Cropped
    Customers using Reporting Services on Windows 7 may sometimes find charts are zoomed in and cropped. To work around the issue, some customers set ImageConsolidation to false.
  • Batch Containing ALTER TABLE Not Cached
    In certain situations, when a batch contains an ALTER TABLE command, the entire batch is not cached.
  • Collapsed Cells or Rows Render Incorrectly If Hidden
    Some customers who have hidden rows in their Reporting Services reports may have noticed rendering issues when cells or rows are collapsed. When writing a hidden row, the Style attribute is opened to write a height attribute; the attribute can end up empty when the width should not be zero.


Tracking DML Changes


Feature

Use a database audit to generate a log tracing DML statements against a specific database.



a) Configure the audit

USE [master]
GO

-- Disable the audit if it already exists
IF EXISTS (SELECT * FROM sys.server_audits WHERE name = 'AWDMLAudit')
    ALTER SERVER AUDIT [AWDMLAudit] WITH (STATE = OFF)
GO

-- Drop the audit if it already exists
IF EXISTS (SELECT * FROM sys.server_audits WHERE name = 'AWDMLAudit')
    DROP SERVER AUDIT [AWDMLAudit]
GO

-- Create the server audit that the database audit specification will write to
CREATE SERVER AUDIT [AWDMLAudit]
TO FILE
(  FILEPATH = 'D:\TEST'
   ,MAXSIZE = 200 MB
   ,MAX_ROLLOVER_FILES = 0
   ,RESERVE_DISK_SPACE = OFF
)
WITH
(  QUEUE_DELAY = 2000
   ,ON_FAILURE = CONTINUE
)
GO

-- Enable the audit
ALTER SERVER AUDIT [AWDMLAudit] WITH (STATE = ON)
GO

-----------------------------------------------------------------------
-- Create a database audit specification that audits DML activity
-----------------------------------------------------------------------
IF DB_ID('AuditTestDB') IS NULL
    CREATE DATABASE AuditTestDB
GO

use AuditTestDB
go

IF OBJECT_ID('dbo.tbltest') IS NOT NULL
    drop table dbo.tbltest
go

create table dbo.tbltest
(
    aa int
    , bb int
)
go

-- Drop the audit specification if it already exists
IF EXISTS (SELECT * FROM sys.database_audit_specifications WHERE name = 'audittest1')
BEGIN
    ALTER DATABASE AUDIT SPECIFICATION audittest1 WITH (STATE = OFF)
    DROP DATABASE AUDIT SPECIFICATION audittest1
END
GO

-- Audit all INSERT, UPDATE, and DELETE statements against the dbo schema
CREATE DATABASE AUDIT SPECIFICATION audittest1
FOR SERVER AUDIT AWDMLAudit
ADD (INSERT, UPDATE, DELETE
     ON SCHEMA::dbo
     BY PUBLIC)
WITH (STATE = ON)
GO


b) Test

insert into dbo.tbltest values(1,1)
go

update dbo.tbltest
set bb = 2
where aa = 1
go

-- Query the audit log
USE master
GO

-- Build the audit file pattern
DECLARE @filepattern VARCHAR(300)
DECLARE @folder VARCHAR(255)
DECLARE @auditguid VARCHAR(36)

SELECT @auditguid = audit_guid, @folder = log_file_path
FROM sys.server_file_audits
WHERE name = 'AWDMLAudit'

SELECT @filepattern = @folder + '*_' + @auditguid + '*'

-- View the results
SELECT
    a.name AS Action
    , event_time
    , c.class_type_desc AS ObjectType
    , f.server_principal_name
    , f.schema_name
    , f.OBJECT_NAME
    , f.statement
FROM fn_get_audit_file(@filepattern, NULL, NULL) AS f
JOIN sys.dm_audit_class_type_map c ON f.class_type = c.class_type
JOIN sys.dm_audit_actions a ON f.action_id = a.action_id
    AND c.securable_class_desc = a.class_desc
WHERE f.action_id <> 'AUSC'
ORDER BY event_time DESC, sequence_number

As you can see above, the audit shows which table was modified, when, and by whom ^^.

Take care ^^
 

Detecting and Checking Deadlocks


Deadlocks can be detected in several ways:
a. The SQL errorlog
b. Monitoring deadlocks with the alert system
c. Tracing deadlock information with SQL Server Profiler

This post explains how to monitor deadlocks using the alert system.
Goal: when a deadlock occurs, record the deadlock information in a database named DBA (only possible on SQL 2008 and later).

1. Check that the trace flags for deadlock detection are enabled
   T1204: returns the types of locks participating in the deadlock, the resources involved, and the commands currently affected.
   T1222: returns the same information, in an XML format that does not conform to an XSD schema.
DBCC TRACEON (1222, 1204, -1);
go
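
You can confirm the flags are active with DBCC TRACESTATUS:

DBCC TRACESTATUS (1204, 1222, -1);
go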

 
 
2. Raise an alert when deadlock information is generated

a. Create the objects
------------------------------------------------------------------------------          
-- TABLE
------------------------------------------------------------------------------
CREATE TABLE [dbo].[TblAlert](
   [mIdx] [int] IDENTITY(1,1) NOT NULL,
   [mRegDate] [datetime] NOT NULL,
   [mType] [tinyint] NOT NULL,
   [DeadLocks] [xml] NULL,
   CONSTRAINT [PK_TblAlert] PRIMARY KEY CLUSTERED
   (
   [mIdx] DESC
   )
)
GO
ALTER TABLE [dbo].[TblAlert] ADD  CONSTRAINT [DF_TblAlert_mRegDate]  DEFAULT (getdate()) FOR [mRegDate]
GO
ALTER TABLE [dbo].[TblAlert] ADD  CONSTRAINT [DF_TblAlert_mType]  DEFAULT ((0)) FOR [mType]
GO
 
--------------------------------------------------------------------------------------       
-- PROCEDURE, FUNCTION
--------------------------------------------------------------------------------------
GO
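-- Collects xml_deadlock_report events from the system_health Extended Events
-- session's ring buffer and appends them to dbo.TblAlert as XML.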
CREATE PROCEDURE [dbo].[Usp_Ins_DeadLockCount]
AS
   DECLARE @DeadLocks XML;     
 
   SET @DeadLocks =
   (
    SELECT
    x.y.value('(@name)[1]', 'varchar(50)') [deadlock-list/deadlock/@eventname],
    x.y.value('(@timestamp)[1]', 'datetime') [deadlock-list/deadlock/@timestamp],
    CAST(x.y.value('data(.//data/value)[1]','VARCHAR(MAX)')  AS XML).value('deadlock[1]/victim-list[1]/victimProcess[1]/@id[1]','VARCHAR(15)') [deadlock-list/deadlock/@victim],
    CAST(x.y.value('data(.//data/value)[1]','VARCHAR(MAX)')  AS XML).query('deadlock/process-list/*') [deadlock-list/deadlock/process-list],
    CAST(x.y.value('data(.//data/value)[1]','VARCHAR(MAX)')  AS XML).query('deadlock/resource-list/*') [deadlock-list/deadlock/resource-list]
    FROM
    (
     SELECT
     CAST([target_data] as XML) [target_data]
     FROM sys.dm_xe_session_targets st
     INNER JOIN sys.dm_xe_sessions s ON s.[address] = st.[event_session_address]
     WHERE s.[name] = 'system_health'
    ) [deadlock]
    CROSS APPLY [target_data].nodes('/RingBufferTarget/event') AS x(y)
    WHERE x.y.query('.').exist('/event[@name="xml_deadlock_report"]') = 1
    FOR XML PATH(''),ROOT('root')
   )
   INSERT INTO [dbo].[TblAlert] (mType, DeadLocks)        
   SELECT
      1, x.y.query('.') [deadlocks]
   FROM @DeadLocks.nodes('/root/deadlock-list') AS x(y);
 
GO
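-- Shreds a stored deadlock graph into one row per participating process,
-- marking the victim and joining each process to the keylock resource it owns.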
CREATE FUNCTION [dbo].[UNF_DEADLOCK_VIEW]
(
   @deadlock XML
)
RETURNS TABLE
AS
RETURN
(
   SELECT
    [PagelockObject] = @deadlock.value('/deadlock-list[1]/deadlock[1]/resource-list[1]/pagelock[1]/@objectname', 'varchar(200)'),
    [DeadlockObject] = @deadlock.value('/deadlock-list[1]/deadlock[1]/resource-list[1]/objectlock[1]/@objectname', 'varchar(200)'),
    [KeylockObject] = Keylock.Process.value('@objectname', 'varchar(200)'),
    [Index] = Keylock.Process.value('@indexname', 'varchar(200)'),
    [IndexLockMode] = Keylock.Process.value('@mode', 'varchar(5)'),
    [Victim] = case
    when Deadlock.Process.value('@id', 'varchar(50)') = @deadlock.value('/deadlock-list[1]/deadlock[1]/@victim', 'varchar(50)') then 1 else 0 end,
    [Procedure] = Deadlock.Process.value('executionStack[1]/frame[1]/@procname[1]', 'varchar(200)'),
    [LockMode] = Deadlock.Process.value('@lockMode', 'char(1)'),
    [Code] = Deadlock.Process.value('executionStack[1]/frame[1]', 'varchar(1000)'),
    [ClientApp] = Deadlock.Process.value('@clientapp', 'varchar(100)'),
    [HostName] = Deadlock.Process.value('@hostname', 'varchar(20)'),
    [LoginName] = Deadlock.Process.value('@loginname', 'varchar(20)'),
    [TransactionTime] = Deadlock.Process.value('@lasttranstarted', 'datetime'),
    [InputBuffer] = Deadlock.Process.value('inputbuf[1]', 'varchar(1000)')
    from @deadlock.nodes('/deadlock-list/deadlock/process-list/process') as Deadlock(Process)
     LEFT JOIN @deadlock.nodes('/deadlock-list/deadlock/resource-list/keylock') as Keylock(Process)
      ON Keylock.Process.value('owner-list[1]/owner[1]/@id', 'varchar(50)') =
         Deadlock.Process.value('@id', 'varchar(50)')
);
 
GO
--------------------------------------------------------------------------------------       
-- Add the alert and job
--------------------------------------------------------------------------------------
USE [msdb]
GO
 
EXEC sp_add_job @job_name = '[경고] 데드락발생'
EXEC sp_add_jobstep
   @job_name = '[경고] 데드락발생'
   , @step_id = 1
   , @step_name = '데드락발생'
   , @subsystem = 'TSQL'
   , @command = 'EXEC dbo.Usp_Ins_DeadLockCount'
   , @database_name = 'DBA'
EXEC sp_add_jobserver
   @job_name = '[경고] 데드락발생'
   , @server_name = N'(local)'
GO
EXEC sp_add_alert
   @name = '[경고] 데드락발생경고'
   , @message_id = 0
   , @severity = 0
   , @enabled = 1
   , @delay_between_responses = 0
   , @include_event_description_in = 0
   , @category_name = N'[Uncategorized]'
   , @performance_condition = N'SQLServer:Locks|Number of Deadlocks/sec|_Total|>|0'
   , @job_name = '[경고] 데드락발생'
GO

b. Cause a deadlock
USE DBA
GO
 
Create table dbo.Cat
(
    Thing1 int not null
);
 
Create table dbo.Hat
(
    Thing2 int not null
);
 
insert into dbo.Cat
values(1);
 
insert into dbo.Hat
values(1);
 
go
 
 
-- session 1
 
BEGIN TRANSACTION
 
UPDATE dbo.Hat
SET Thing2 = 1
 
-- session 2
BEGIN TRANSACTION
 
UPDATE dbo.Cat
SET Thing1 = 1
 
 
-- session 1
UPDATE dbo.Cat
SET Thing1 = 1
 
 
-- session 2
UPDATE dbo.Hat
SET Thing2 = 1
 
c. Check the deadlock information

SELECT *
FROM dbo.TblAlert T1
    CROSS APPLY dbo.UNF_DEADLOCK_VIEW(T1.DeadLocks) T2



BACKUP LOG WITH TRUNCATE_ONLY - a feature no longer available in SQL2008

The option used to empty the transaction log when it grows too large or fills up can no longer be used in SQL Server 2008 and later.


use master
go

-- Drop the demo database if it already exists
IF DB_ID('shrinkdemo') IS NOT NULL
    DROP DATABASE shrinkdemo
go

CREATE DATABASE shrinkdemo
ALTER DATABASE shrinkdemo SET RECOVERY FULL
go

sp_helpdb shrinkdemo
go

--name	fileid	filename	filegroup	size
--shrinkdemo	1	D:\DB\MSSQL\shrinkdemo.mdf	PRIMARY	4160 KB
--shrinkdemo_log	2	D:\DB\MSSQL\shrinkdemo_log.ldf	NULL	1040 KB

USE shrinkdemo
go
 
-- Note: the number of VLFs created depends on the size of the log file when it is created or extended

--Initial or Extended Size	Number of VLFs created
--<64MB	4
-->=64MB and < 1GB	8
-->=1GB	16

-- The log is only about 1MB, so check the VLF count
DBCC LOGINFO

--RecoveryUnitId	FileId	FileSize	StartOffset	FSeqNo	Status	Parity	CreateLSN
--0	2	262144	8192	34	2	64	0
--0	2	262144	270336	0	0	0	0
--0	2	262144	532480	0	0	0	0
--0	2	270336	794624	0	0	0	0
-- 4 VLFs.



CREATE TABLE biggie (a int, padding char(4000) default ' ')
go
 
INSERT biggie (a)
   SELECT a.object_id
   FROM   sys.objects a
   CROSS  JOIN sys.objects b
   CROSS  JOIN sys.objects c
go
 
EXEC sp_helpdb shrinkdemo
go
 
-- log file size has grown.
--name  size
--shrinkdemo    667904 KB
--shrinkdemo_log    770752 KB
 
 
-- Check the VLF count again
DBCC LOGINFO
-- A total of 32 VLF chunks were created.
 
-- Switch the recovery model to SIMPLE
ALTER DATABASE shrinkdemo SET RECOVERY SIMPLE
go
 
-- Shrink the log file
DBCC SHRINKFILE(shrinkdemo_log)
go

--DbId	FileId	CurrentSize	MinimumSize	UsedPages	EstimatedPages
--23	2	130	130	128	128

DBCC LOGINFO
-- Back to 4 VLFs
--RecoveryUnitId	FileId	FileSize	StartOffset	FSeqNo	Status	Parity	CreateLSN
--0	2	262144	8192	337	2	128	0
--0	2	262144	270336	35	0	64	0
--0	2	262144	532480	36	0	64	0
--0	2	270336	794624	37	0	64	0
 
EXEC sp_helpdb shrinkdemo
go

-- Shrunk. (The file is back to about 1MB, so it is laid out as 4 chunks.)
--name  size
--shrinkdemo    667904 KB
--shrinkdemo_log    576 KB
 


-- Resize the log file to an appropriate size
ALTER DATABASE shrinkdemo 
MODIFY FILE
(
      NAME = shrinkdemo_log 
    , SIZE = 200MB
)
go

-- Switch the recovery model back to FULL
ALTER DATABASE shrinkdemo SET RECOVERY FULL
go

-- 8 would be normal, so why 12?
DBCC LOGINFO
go

-- The file size is 200MB, so why were 12 VLFs created?
-- There were 4 after the minimal shrink, hmm...
-- Explanation: the 4 existing VLFs remain, and growing the file from ~1MB to 200MB
-- is a single extension of >= 64MB and < 1GB, which adds 8 new VLFs (see the table
-- above): 4 + 8 = 12. The 8 new VLFs below all share the same CreateLSN.

--RecoveryUnitId	FileId	FileSize	StartOffset	FSeqNo	Status	Parity	CreateLSN
--0	2	262144	8192	337	2	128	0
--0	2	262144	270336	35	0	64	0
--0	2	262144	532480	36	0	64	0
--0	2	270336	794624	37	0	64	0
--0	2	26017792	1064960	0	0	0	337000000006400001
--0	2	26017792	27082752	0	0	0	337000000006400001
--0	2	26017792	53100544	0	0	0	337000000006400001
--0	2	26017792	79118336	0	0	0	337000000006400001
--0	2	26017792	105136128	0	0	0	337000000006400001
--0	2	26017792	131153920	0	0	0	337000000006400001
--0	2	26017792	157171712	0	0	0	337000000006400001
--0	2	26525696	183189504	0	0	0	337000000006400001



Myths and misconceptions: 60-page PDF of blog posts

http://www.sqlskills.com/blogs/paul/CommonSQLServerMyths.pdf

Sharing a good resource~~

I found it on 김정선's site ^^



Back when I did the DBA-Myth-A-Day series in April, many people asked for the 30 blog posts to be collected up into a document for easy printing/reading/etc. I promised, but never got around to it, until I had an email from a blog reader, Peter Maloof, who had very kindly done all the work of producing a Word doc with everything collected together. I've prettified the doc and produced a PDF which you can print out and redistribute as you see fit.

You can download the PDF here: CommonSQLServerMyths.pdf (732.96 kb)

Enjoy!


Changing server collation and database collation at once

- Back up every database on the SQL Server instance.
- Check that there is plenty of free disk space.
- Open a command prompt (Windows Server 2003) or an administrator command prompt (Windows Server 2008).
- Run the following command. Note: it must be the sqlservr.exe of the instance whose collation you want to change.


sqlservr.exe -m -T4022 -T3659 -q"latin1_general_ci_as"

or, for a named instance:

sqlservr.exe -m -T4022 -T3659 -q"latin1_general_ci_as" -s InstanceName

(-m starts the instance in single-user mode; trace flag 4022 skips automatically started procedures; trace flag 3659 logs all startup errors to the errorlog; the undocumented -q parameter rebuilds every database in the specified collation.)
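
After restarting the instance normally, you can confirm the change with:

SELECT SERVERPROPERTY('Collation') AS server_collation;
SELECT name, collation_name FROM sys.databases;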


From 르메's blog ^^
http://purumae.tistory.com/ 

Optimising Server-Side Paging - Part I

http://www.sqlservercentral.com/articles/paging/69892/

This is good stuff...
Worth going through again~~

Introduction

It is common to need to access rows from a table or result set one page at a time, perhaps for display purposes. This short series of articles explores a number of optimisations that can be applied to the basic technique of using the ROW_NUMBER ranking function to identify the rows to return for a requested page.

This series does not include an introduction to the core method, since there are a number of good articles already on SQL Server Central which do this. An excellent explanation can be found in this article by regular columnist Jacob Sebastian.

This first article takes a close look at an optimisation that is useful when paging through a wide data set. The next parts in the series will cover optimisations for returning the total number of pages available, and implementing custom ordering.

For layout reasons, the code samples in the original article are rendered as images, which do not survive here; representative sketches are included below instead. A full test script with additional annotations can be found in the Resources section at the end of this article.

Paging through a wide data set

A wide data set is one which has a large average row size. Paging through a set like this presents some special challenges. To illustrate, this article uses a table that might form part of a design for an on-line discussion forum (like the ones here on this site).

The rows of the example table are potentially quite wide, since the body text of a posted message can use up to 2500 Unicode characters.

Sample Data

The following code creates the test table and populates it with 10,000 random rows.
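
The original code image is lost; the following is a minimal sketch, assuming a table shaped like the one described (a 4-byte integer post_id clustered key plus a wide Unicode body column; all names are illustrative):

CREATE TABLE dbo.Post
(
    post_id    integer IDENTITY PRIMARY KEY CLUSTERED,  -- narrow 4-byte clustered key
    thread_id  integer NOT NULL,
    member_id  integer NOT NULL,
    body       nvarchar(2500) NOT NULL                  -- up to 2500 Unicode characters
);
GO

-- Populate with 10,000 rows of random-length filler text
INSERT dbo.Post (thread_id, member_id, body)
SELECT TOP (10000)
    ABS(CHECKSUM(NEWID())) % 1000,
    ABS(CHECKSUM(NEWID())) % 5000,
    REPLICATE(N'x', 100 + ABS(CHECKSUM(NEWID())) % 2400)
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;
GO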

The task is to return data from this table as a paged data set. The client will supply the page number required, and the number of rows per page.

Initial solution

One solution to this problem uses a common table expression (CTE) to number the rows, and an expression in the WHERE clause to return just the rows that correspond to the single page of data required. A TOP expression is used to allow SQL Server to stop looking through the table as soon as it has found the rows needed.
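
A sketch of that shape, using the hypothetical dbo.Post table above (@PageNumber and @PageSize stand in for the client-supplied values):

DECLARE @PageNumber bigint = 100, @PageSize bigint = 50;

WITH Numbered AS
(
    SELECT TOP (@PageNumber * @PageSize)   -- lets the scan stop once enough rows are numbered
        post_id, thread_id, member_id, body,
        rn = ROW_NUMBER() OVER (ORDER BY post_id)
    FROM dbo.Post
    ORDER BY post_id
)
SELECT post_id, thread_id, member_id, body
FROM Numbered
WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1
             AND @PageNumber * @PageSize
ORDER BY post_id;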

Running the above code on our test data produces an execution plan like this:

This is a simple and efficient-looking plan, but scanning the clustered index of a wide table can quickly become expensive. The following graph shows the number of physical reads incurred using this method, when reading a specified page number from our test table, with 50 rows per page.

Why scanning the clustered index can be inefficient

The cost of scanning the clustered index rises quickly due to the high average size of the data rows. This might seem counter-intuitive, since the clustered index key is very narrow - just 4 bytes for the integer post_id column.

While that is certainly true, the key just defines the logical order of data pages - the leaf level of the clustered index contains the entire data row, by definition. When SQL Server scans a clustered index in an ordered fashion, it follows a linked list of page IDs. Since the linked list is found at the leaf level, entries are separated by the full width of the data row. This makes the clustered index the least-dense index possible.

Alternative Methods

Using a non-clustered index

One alternative to scanning the clustered index, is to create a covering non-clustered index. In this case, however, a covering index is not practical since it would essentially duplicate the clustered index.

We might instead consider creating a non-clustered index just on the post_id column, with a plan to use that index to quickly find the page required, and then look up the rest of the row data. The index is created using this statement:
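
Presumably something close to this (the index name is illustrative):

CREATE NONCLUSTERED INDEX nc_Post_post_id
ON dbo.Post (post_id);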

This new index is very narrow (just 4 bytes per row at the leaf, plus index overhead), which ought to make finding the requested page very much more efficient. Sadly, running the query again with a forced index produces the following execution plan:

This plan is very similar to that generated for the clustered index, except now the query performs a bookmark lookup for every row, in order to fetch the columns not included in the index. This plan is less efficient than scanning the clustered index, so it seems that this is a step backward.

The problem is that SQL Server is fetching the off-index columns for every row it examines - not just the rows that will be eventually returned. The Key Lookup operator fetches the extra columns before the Sequence Project operator assigns a row number, and before the Filter has a chance to restrict the rows based on the assigned row number.

A much better execution plan would scan the non-clustered index, assign row numbers, filter the records, and only then use a Key Lookup to fetch the off-index columns. Since we know we will only be returning a maximum of 50 rows, it seems likely that the cost of 50 lookups would be more than offset by scanning the much narrower non-clustered index to begin with.

Unfortunately, SQL Server does not yet include logic to spot this sort of optimisation, and always places a Key Lookup just after the Index Seek or Scan it is associated with.

The 'Key Seek' Method

We can work around this limitation by rewriting the query as three logically separate steps:

  1. Start a partial scan of the non-clustered index in post_id order, assigning row numbers as we go.
  2. Filter those row numbers down to just the single page of rows we want.
  3. Use the 50 primary keys from step 2 to look up the off-index columns for the final output.

The following code implements these three steps in a single statement:
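
A sketch of the Key Seek query, again against the hypothetical table (in the original, an index hint forces the narrow non-clustered index for step 1):

DECLARE @PageNumber bigint = 100, @PageSize bigint = 50;

WITH Keys AS
(   -- Step 1: partial in-order scan of the narrow index, numbering rows as we go
    SELECT TOP (@PageNumber * @PageSize)
        post_id,
        rn = ROW_NUMBER() OVER (ORDER BY post_id)
    FROM dbo.Post WITH (INDEX(nc_Post_post_id))
    ORDER BY post_id
),
SelectedKeys AS
(   -- Step 2: keep just the keys for the requested page
    SELECT post_id, rn
    FROM Keys
    WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1
                 AND @PageNumber * @PageSize
)
-- Step 3: at most 50 lookups to fetch the off-index columns
SELECT p.post_id, p.thread_id, p.member_id, p.body
FROM SelectedKeys AS sk
JOIN dbo.Post AS p ON p.post_id = sk.post_id
ORDER BY sk.rn;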

The query plan for this implementation is:

Notice that the Sequence Project and Filter operators now precede the Key Lookup in the plan, meaning that a maximum of 50 lookups will be performed. Because this method performs an index seek on filtered keys, it is referred to as the Key Seek method.

Testing and Results

Test environment

The tests were run on a single-core machine running SQL Server 2008 Developer Edition, version 10.0.2757. The code has also been tested on SQL Server 2005 Developer Edition, version 9.0.4285.

System caches, including the buffer pool (data cache) were fully cleared before each run, and the SQL Server read-ahead mechanism was disabled.

Clearing the buffer pool ensures that each data page required comes from disk, and disabling read-ahead ensures repeatable results on different editions of SQL Server. Early testing found that enabling read-ahead tended to disadvantage the clustered index scanning method, since SQL Server frequently read more pages than were ultimately required, resulting in an artificially high number of reads.

The purpose of clearing the cache and disabling read-ahead is to produce clear and consistent test results. Do not run the scripts included in this article on anything other than a personal or dedicated test server.
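
The article does not show those commands; on a dedicated test server this is typically done with something like the following (trace flag 652 is undocumented and disables read-ahead):

CHECKPOINT;                 -- flush dirty pages so clean buffers can be dropped
DBCC DROPCLEANBUFFERS;      -- empty the buffer pool
DBCC FREEPROCCACHE;         -- clear cached plans
DBCC TRACEON (652, -1);     -- disable read-ahead, globally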

All tests were run on the same 10,000 row data set, using 50 rows per page, and requesting page numbers 1, 10, 50, 100, and 200 (the last page).

The data concerning physical reads, logical reads, CPU time, and elapsed time were obtained from the sys.dm_exec_query_stats dynamic management view, and validated against Profiler output and from runs with SET STATISTICS IO, TIME ON. Buffer pool usage was determined from sys.dm_os_buffer_descriptors.

The full test script is included in the Resources section at the end of this article.

Physical reads

Although the query plan for the Key Seek method looks a little more complex than the clustered index scan version, it is surprisingly efficient, with consistent performance across the whole range of tested page numbers. The following graph shows the physical reads incurred by the Key Seek method, in comparison to the clustered index scan.

The Key Seek method ranges from 6 to 10 physical reads, whereas the clustered scan method uses between 5 and 694 physical reads.

Confirmation of the small number of data pages needed by the Key Seek method can be found later in this article, in the analysis of Buffer Pool memory usage. That analysis also shows that each physical read was able to load multiple 8KB data pages.

Logical reads

Logical reads are the sum of read operations on pages already in the data cache, plus any physical reads from external storage. Operations on a cached page are very fast - many thousands of times faster than a physical read - but we must nevertheless account for them. The following graph shows a comparison of logical reads for the two methods:

Again, the Key Seek method is very consistent, though it does perform more logical reads than the clustered index scan for the first few pages. Since we already know how many physical reads were required, it is clear that very nearly all of the logical reads are reads from data cache.

CPU and total elapsed time

Total worker time is generally slightly higher for the Key Seek method, due to the extra logical operations performed. As in the other tests, the difference is most pronounced for low numbered pages.

The tests for elapsed time show the Key Seek ahead in all cases - emphasising the dominant contribution of physical I/O.

Memory Usage

This test measures the number of 8KB index and data pages brought into the buffer pool by each method.

The clustered index scan consumes very slightly less memory only for the very first page. After the first page, the Key Seek method has a smaller buffer pool requirement.

This result confirms the observations from the physical read test, showing that the Key Seek method touches a consistently small number of data pages, and that many data pages can be fetched with a single physical read.

Conclusion

The Key Seek method provides a fast and efficient way to implement server-side paging when the source row data is relatively wide. The method gives predictable and consistent performance regardless of the page number requested, while making efficient use of system resources.

Very high-volume applications that frequently access the first few pages of a result set might consider combining the two methods presented here - using a clustered index seek for the first few pages, and switching to the Key Seek method for all other requests.

A comprehensive test script containing all the code samples from this article, together with additional comments and annotations can be found below.

Resources:

Optimising Server-Side Paging Part I.sql

Fixing inaccurate row counts in MSSQL

http://www.sqlhacks.com/Administration/Row-Count-Inaccurate


DBCC updateusage (0) WITH count_rows;
go

  • If nothing is returned, the usage/index statistics are already up to date.
  • dbcc updateusage (0) updates the current database.
  • with count_rows means that the row count column is also updated.
  • You need admin privileges on the database.
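
To see whether the catalog counts have drifted, you can compare sys.partitions against an actual COUNT(*) before running the fix (table name is illustrative):

-- Catalog row count (can drift after crashes or bulk operations)
SELECT SUM(p.rows) AS catalog_rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.tbltest')
  AND p.index_id IN (0, 1);    -- heap or clustered index

-- Actual row count
SELECT COUNT(*) AS actual_rows FROM dbo.tbltest;

-- If they disagree, correct the catalog:
DBCC UPDATEUSAGE (0) WITH COUNT_ROWS;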


T-SQL Debugger is back in SQL Server 2008 ... debugging now made easy

Reference link >>
http://www.mssqltips.com/tip.asp?tip=1695

Debugging in SQL 2008 has improved a lot..
It now feels like debugging in C++ ^^

How to debug

1) Press F9 to set a breakpoint at the location you want to debug.
2) Press ALT+F5 to start debugging.
3) Step through statements one at a time with F10 (this may vary with your key bindings).
4) To step from one procedure into a procedure it calls, press F11.
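
A minimal pair of procedures to try this with (names are illustrative): put the breakpoint on the final EXEC line, start with ALT+F5, then press F11 on the inner EXEC to step into the called procedure.

CREATE PROCEDURE dbo.usp_Inner
    @x int
AS
BEGIN
    SELECT @x * 2 AS doubled;        -- inspect @x in the Locals window here
END
GO

CREATE PROCEDURE dbo.usp_Outer
AS
BEGIN
    DECLARE @i int = 1;
    WHILE @i <= 3
    BEGIN
        EXEC dbo.usp_Inner @x = @i;  -- F11 here steps into usp_Inner
        SET @i += 1;
    END
END
GO

EXEC dbo.usp_Outer;                  -- F9 on this line, then ALT+F5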