684 posts under 'All Categories'

  1. Changing mstsc options
  2. Building a stable database and a disaster recovery strategy
  3. SQL book recommendations
  4. Why you want to be restrictive with shrink of database files
  5. [Design] High Availability Planning (8)
  6. Accelerated SQL Server 2008
  7. Inside Microsoft SQL 2005.TheStorage.chm
  8. Using OUTPUT
  9. Dell server + Windows Server 2008 + SQL Server 2008
  10. Building a stable database and a disaster recovery strategy

Changing mstsc options


Starting with mstsc 6.01, the console option was changed to the admin option.
Use mstsc /admin instead of mstsc /console and you get the same remote connection as before. ^^

http://nexondbteam.tistory.com/trackback/37

SQL book recommendations

Enjoy Another Sandwich -- Kent Tegels

Yummy slices of SQL Server between slices of .NET and XML

Recommended readings: 2008 update

In class this week, I got asked a few times about the technical books I'd recommend for folks interested in the data space. While I've had a list up on Amazon for a while, I thought I'd post an updated list here.

http://sqlblog.com/blogs/kent_tegels/archive/2008/07/25/8044.aspx



SQL Server

  • A Developer's Guide to SQL Server 2005: Bob Beauchemin, Dan Sullivan
  • Accelerated SQL Server 2008: Rob Walters
  • Applied Microsoft Analysis Services 2005 and Microsoft Business Intelligence Platform: Teo Lachev
  • Dissecting SQL Server Execution Plans: Grant Fritchey
  • Expert SQL Server 2005 Integration Services: Brian Knight, Erik Veerman
  • Inside Microsoft SQL Server 2005: Query Tuning and Optimization: Kalen Delaney, Sunil Agarwal, Craig Freedman, Ron Talmage, Adam Machanic
  • MCITP Developer: Microsoft SQL Server 2005 Database Solutions Design: Victor Isakov
  • Pro SQL Server 2005 Service Broker: Klaus Aschenbrenner
  • Professional SQL Server 2005 XML: Scott Klein
  • SQL Server 2005 Practical Troubleshooting: The Database Engine: Ken Henderson
  • The Rational Guide to SQL Server 2005 Service Broker: Roger Wolter

Why you want to be restrictive with shrink of database files

Overview
Introduced in SQL Server 7.0 was the ability to automatically grow and shrink the physical size of database data and transaction log files. Autogrow of a file doesn't happen before the file is full; it happens when new space is needed (for instance when an insert is performed), so the user has to wait for the grow to finish before the modification completes.

Autogrow and shrink can be very useful under special circumstances, for example after archiving data to some other location. However, we often see DBAs doing shrinks on a regular basis, and the purpose of this article is to explain some of the downsides of shrink and what actually happens when you shrink a database file. Also, it is worth noting that the autogrow functionality was mainly added so the DBA wouldn't be paged in the middle of the night because the database is full. It was never intended to be a high-performance feature or to replace the need to manage space usage, especially at the high end.
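
As an aside, if you want to avoid waiting on 10% autogrows, you can pre-size the files and use a fixed growth increment. A minimal sketch (the database name myDatabase, the logical file names myDatabase_data and myDatabase_log, and the sizes are all placeholders, not anything from this article):

--Pre-size the files and use a fixed growth increment instead of a percentage.
--Note: SIZE can only be used to make a file larger than it currently is.
ALTER DATABASE myDatabase
MODIFY FILE (NAME = myDatabase_data, SIZE = 10240MB, FILEGROWTH = 512MB)

ALTER DATABASE myDatabase
MODIFY FILE (NAME = myDatabase_log, SIZE = 2048MB, FILEGROWTH = 256MB)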

Acknowledgements
I would like to thank SQL Server MVP Mark Allison,
http://www.markallison.co.uk, who has provided valuable suggestions and input for this article.

Versions of SQL Server
This article applies to SQL Server 7.0, 2000, 2005 and 2008, where not noted otherwise.

More information
You can shrink a database file using either DBCC SHRINKDATABASE (which targets all files for the database) or DBCC SHRINKFILE (which targets a particular database file). I prefer SHRINKFILE. I will not go through the details of the commands here; they are documented in SQL Server Books Online. Let us first determine what actually happens when you shrink a database file:
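
Just to show the difference between the two commands, here is a minimal sketch (myDatabase and myDatabase_data are placeholder names):

--Shrink all files in the database, leaving 10 percent free space in each file:
DBCC SHRINKDATABASE (myDatabase, 10)

--Shrink only one file, the logical file myDatabase_data, down to a target of 500 MB:
DBCC SHRINKFILE (myDatabase_data, 500)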

Shrinking of data file
When you shrink a data file, SQL Server will first move pages towards the beginning of the file. This frees up space at the end of the file and the file can then be shrunk (or as I like to view it: "cut off at the end").

Management of transaction log file
This section is only to serve as a brief introduction to the topic of transaction log file management. See below for reference.
Each modification performed in the database is reflected in the transaction log file. Needless to say, these log records need to be removed (or actually: overwritten) sooner or later - or else we would end up with a huge log file or a full log file.
One way is to set the database to the simple recovery model. This means that SQL Server will automatically truncate the log (or, as I like to call it: "empty the log").
Another option is to have the database in the full or bulk_logged recovery model and perform regular transaction log backups (BACKUP LOG). The transaction log is emptied when you back up the log. Note that the log is *not* emptied by other backup types (like BACKUP DATABASE).
See the SQL Server 2005 Books Online reference
http://msdn.microsoft.com/en-us/library/ms345583.aspx for details. Make sure you read all the subsections. It is worth the 30 minutes of your time to understand how to manage the transaction log. Trust me on this.
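
For reference, the two options described above boil down to something like this sketch (the database name and backup path are placeholders):

--Option 1: simple recovery model; SQL Server truncates the log for you at checkpoints.
ALTER DATABASE myDatabase SET RECOVERY SIMPLE

--Option 2: full (or bulk_logged) recovery model plus regular log backups.
ALTER DATABASE myDatabase SET RECOVERY FULL
BACKUP LOG myDatabase TO DISK = N'D:\Backup\myDatabase_log.trn'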

Shrinking of transaction log file
SQL Server cannot move log records from the end of the log file toward the beginning of the log file. This means that SQL Server can only cut down the file size if the file is empty at the end of the file. The end-most log record sets the limit of how much the transaction log can be shrunk. A transaction log file is shrunk in units of Virtual Log Files (VLF). You can see the VLF layout using the undocumented DBCC LOGINFO command, which returns one row per virtual log file for the database:

DBCC LOGINFO('myDatabase')
FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         11      0       128     0
2       253952    262144       13      0       128     0
2       270336    516096       12      0       128     7000000025000288
2       262144    786432       14      2       128     9000000008400246

The interesting column is "Status". 0 means that the VLF is not in use and 2 means that it is in use. In my example, I have a 2 at the end of the file (read the result from top to bottom), which means that the file cannot currently be shrunk.
In 7.0, you have to generate dummy transactions so that the usage of the log file wraps toward the beginning of the file. You can then empty the log file using BACKUP LOG and then shrink the file.
As of SQL Server 2000, the generation of dummy log records is done for you when you execute the DBCC SHRINKFILE command.
What you end up doing is BACKUP LOG followed by DBCC SHRINKFILE, several times, investigating the layout of the log file with DBCC LOGINFO in between.
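
In practice that sequence can look like the sketch below (names, path and target size are placeholders); repeat the BACKUP LOG / DBCC SHRINKFILE pair until DBCC LOGINFO no longer shows in-use VLFs (Status = 2) at the end of the file:

--Repeat as needed:
BACKUP LOG myDatabase TO DISK = N'D:\Backup\myDatabase_log.trn'
DBCC SHRINKFILE (myDatabase_log, 100)   --target size in MB

--Check the VLF layout in between:
DBCC LOGINFO('myDatabase')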

If you have loads of VLFs (many rows returned from DBCC LOGINFO), you probably started with a small log file and then had lots of small autogrows. Having lots of VLFs is a bit of a performance hit. If this is your case, consider shrinking the log file to a very small size and then expanding it to something comfortable (a bigger size).
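
One way that could look (again, names and sizes are placeholders): shrink the log file as far as it will go, then grow it back in one step so you end up with a small number of reasonably sized VLFs:

--Shrink the log file to (close to) its minimum size...
DBCC SHRINKFILE (myDatabase_log, 1)

--...then grow it back to a comfortable size in a single operation.
ALTER DATABASE myDatabase
MODIFY FILE (NAME = myDatabase_log, SIZE = 4096MB)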
Here are some articles specifically about management of log file size:
How to Shrink the SQL Server 7.0 Transaction Log
Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
Log File Grows too big
Log file filling up
Considerations for Autogrow and AutoShrink

So what is the problem? Why shouldn't I shrink database files on a regular basis?
Have a look at the list below and then decide for yourself whether or not you want to shrink database files regularly:
  • Each page moved will be logged to the transaction log.
    Say you have a database with 50 GB of used space (data and index pages), and the shrink will shuffle 40 GB toward the beginning of the file. For this shrink operation the log file will need 40 GB, and it will likely auto-grow to that size (if you don't already have 40 GB of free space in the log file). The following log backup will be 40 GB in size, plus the "regular" log records. This doesn't seem to happen if the database is in the simple recovery model, possibly because CHECKPOINT will prune the log regularly during the shrink.
    (Applies to shrinking of data files.)

  • After the shrink, as users add rows etc. in the database, the file has to grow again.
    Growing a database file is an expensive operation; it takes time and it also hurts performance (lots of resource usage). Also, some modifications will be blocked until the grow operation has finished.
    (Applies to shrinking of both data and log files.)

    SQL Server 2005 and later:
    As of SQL Server 2005, we have "instant file initialization", which means that database files can be created and grown very fast since Windows doesn't "zero out" the data in the database file. Instant file initialization is only available for data files, not log files. Also, instant file initialization requires that the service account for the SQL Server service has the SE_MANAGE_VOLUME_NAME Windows privilege, which can be assigned using the "Perform Volume Maintenance Tasks" security policy. By default this is only granted to Administrators.

  • There are situations where autogrow doesn't "catch up" with the space usage requirements.
    This will result in an error message from SQL Server when the modification is performed, returned to the client application: error 1105 if a data file is full and error 9002 if the log is full.
    (Applies to shrinking of both data and log files.)

  • Moving data pages around will fragment your database.
    Say you rebuild your indexes (which requires free space in the database) and then shrink the database. The shrink will essentially undo the index rebuild, leaving you with fragmented indexes. Don't believe me? This is easy to test for yourself (see the sketch after this list).
    What if you do it the other way around: shrink first, then rebuild? Well, the rebuild needs free space in the database for the largest index that you rebuild, and it is likely you have a large table with a clustered index. A friend of mine had a database with 4 GB of used space, almost all of it in one 4 GB table. He did a shrink and then a rebuild, and the rebuild immediately "bumped up" the database size to 8 GB.
    (Applies to shrinking of data files.)

  • Heavy shrinking and growing of database files will fragment your file system, which will further hurt performance.
    (Applies to shrinking of both data and log files.)
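
Here is one way to test the rebuild-then-shrink point for yourself on SQL Server 2005 and later (myDatabase, myDatabase_data and dbo.myTable are placeholders; run it in the context of the database you are testing):

--Rebuild the indexes, check fragmentation, shrink, then check fragmentation again.
ALTER INDEX ALL ON dbo.myTable REBUILD

SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('myDatabase'), OBJECT_ID('dbo.myTable'), NULL, NULL, 'LIMITED')

DBCC SHRINKFILE (myDatabase_data, 500)

SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('myDatabase'), OBJECT_ID('dbo.myTable'), NULL, NULL, 'LIMITED')
--Expect avg_fragmentation_in_percent to be much higher after the shrink.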

Conclusion
In the end, only you can determine whether you want to shrink or not. With the above information, you hopefully have enough to go on when making that decision. You probably know by now what I think of regular shrinking, not to mention autoshrink. :-)


Transact-SQL code
Below you will find the T-SQL code I ran to show that shrinking a database file generates roughly the same amount of transaction log records as the amount of data that was moved:

--Script to show that shrink produces a lot of log records.
SET NOCOUNT ON
USE master
IF DB_ID('shrink_test') IS NOT NULL
 DROP DATABASE shrink_test
GO
CREATE DATABASE shrink_test
ON PRIMARY
(NAME = shrink_test_data,
FILENAME = N'c:\shrink_test_data.mdf',
SIZE = 5MB,
MAXSIZE = 200MB,
FILEGROWTH = 10%)
LOG ON
(NAME = shrink_test_log,
FILENAME = N'c:\shrink_test_log.ldf',
SIZE = 3MB,
MAXSIZE = 200MB,
FILEGROWTH = 10%)
GO

USE shrink_test
CREATE TABLE t
(c1 int identity CONSTRAINT PK_shrink_test PRIMARY KEY NONCLUSTERED, c2 char(3000) DEFAULT 'hello')
GO

DECLARE @i int
SET @i = 1
WHILE @i <= 40000
BEGIN
 INSERT INTO t DEFAULT VALUES
 IF @i % 100 = 0 --Truncate the log on every 100th row
  BACKUP LOG shrink_test WITH NO_LOG
 SET @i = @i + 1
END
SELECT COUNT(c1) AS "Number of rows before delete, should be 40000" FROM t
GO

--Delete some rows, in a loop so the log doesn't grow!
DECLARE @i int
SET @i = 1
WHILE @i <= 40000
BEGIN
 IF @i <= 20000
  DELETE FROM t WHERE c1 = @i
 SET @i = @i + 1
END
SELECT COUNT(c1) AS "Number of rows after delete, should be 20000" FROM t
GO

--Make sure the tlog file doesn't auto-truncate
BACKUP DATABASE shrink_test TO DISK = 'NUL'

--Data file should be large, log file small
SELECT name, size*8192/(1024*1024) AS sizeInMB FROM sysfiles
GO

--This shrink will now produce a lot of log records, as 20000 rows will be moved!
CHECKPOINT
DBCC SHRINKFILE (shrink_test_data, 40)

--Data file should now be small, but log file large
SELECT name, size*8192/(1024*1024) AS sizeInMB FROM shrink_test..sysfiles

--My result: the data file shrunk to 80MB and the log file grew from 3MB to 146MB!!!
GO
USE master
GO
DROP DATABASE shrink_test
http://sqler.pe.kr/web_board/view_list.asp?id=1195&read=1163&pagec=&ses=7

[Design] High Availability Planning (8)

These days my biggest interest is system architecture.

A system has to be operated for one or two years at the short end and many years at the long end, and you need to be able to respond effectively as the service grows. In reality it is very hard to keep development, operations, maintenance, data management, data analysis and performance issues all in balance while running the service.

If you want to serve several countries or more non-stop (the so-called 7x24), respond flexibly to all kinds of environments, and still take care of performance and data management, there is far more than one or two things to think about. The data in a production DB in particular is like a living organism: many of you will have agonized over how to respond to changes such as social issues or issues within the service itself. Those who have been running services in the field for years will have felt these difficulties especially keenly.

A while ago I learned a lot from a SQLER acquaintance talking about the system architectures of some rather large companies. This is material I found while surfing a few months ago; just in case, I searched Korean blogs and found it had already been introduced on the blog (http://merritt.co.kr) of Park No-cheol, one of the SQLER leaders.

* I had the blog owner wrong. Corrected from Han Dae-sung to Park No-cheol.

It is The eBay Architecture. The DBMS in the example happens to be Oracle, but I don't think this is a question of the DBMS product. What matters in the end is considering the life cycle and usage patterns of the data, predicting how the data will be used and how it will move, and designing so that you can respond flexibly; whether the product is MS SQL or Oracle makes little difference.


Besides the eBay case, I am attaching a few other useful documents on the subject.

1. The eBay architecture : http://pds9.egloos.com/pds/200808/08/70/eBaySDForum.pdf
2. SQL High Availability Planning : http://pds6.egloos.com/pds/200808/08/70/SQL_Scaling.zip

* Because of the SQLER upload size limit, the files are linked on my blog.
  If the links don't work for you, send me a message or an e-mail at wowhoon (at) netsgo (dot) com (netsgo.com)
  and I will send them to you. ^^

Accelerated SQL Server 2008

Material shared by Kim Do-hyun. ^^
A SQL Server 2008 book already~~

Yay.. this is important material~ let's study. haha


Using OUTPUT

Books Online covers this well, but a question came up somewhere, so here is a small example.

Using the OUTPUT clause

if object_id ('Seat') is not null
drop table Seat
go
if object_id ('reservedSeat') is not null
drop table reservedSeat
go


create table Seat
( seatNo int
, seatRow char(1)
, isReserved bit
)
go

set nocount on
begin tran

insert into Seat values (1,'a',0)
insert into Seat values (2,'a',0)
insert into Seat values (3,'a',0)
insert into Seat values (4,'a',0)
insert into Seat values (5,'a',0)
commit tran

go

create table reservedSeat
(reservedNo int
,seatNo int
,seatRow char(1)
)
go

UPDATE SEAT
 SET ISRESERVED = 1
 OUTPUT INSERTED.SEATNO
  , INSERTED.SEATNO
  , INSERTED.SEATROW
 INTO RESERVEDSEAT
WHERE
 SEATNO IN (1,2,3)
 AND ISRESERVED <> 1
GO

SELECT @@ROWCOUNT
GO

-- You must always check the number of affected rows and then COMMIT or ROLLBACK accordingly.

SELECT * FROM SEAT
SELECT * FROM RESERVEDSEAT
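
Regarding the comment above about checking the number of affected rows: a minimal sketch of how the update could be wrapped in an explicit transaction, committing only when all three requested seats were actually reserved (the literal 3 is just for this example):

BEGIN TRAN

UPDATE SEAT
 SET ISRESERVED = 1
 OUTPUT INSERTED.SEATNO, INSERTED.SEATNO, INSERTED.SEATROW
 INTO RESERVEDSEAT
WHERE SEATNO IN (1,2,3)
 AND ISRESERVED <> 1

IF @@ROWCOUNT = 3   -- all requested seats were still free
 COMMIT TRAN
ELSE                -- at least one seat was already reserved, so undo everything
 ROLLBACK TRAN
GO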
왕눈봄이(minsouk1)
Dell server + Windows Server 2008 + SQL Server 2008

[Kim Do-hyun|金稻顯] 13341193412 says:
The clustering side seems to have changed quite a bit^ <-- oho~~

<-- the high availability part

July 4 Offline SQL Seminar

script & doc

1) Inspecting the contents of the mssqlsystemresource database
CREATE DATABASE [mssqlsystemresource_Copy] ON
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mssqlsystemresource_Copy.mdf' ),
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mssqlsystemresource_Copy.ldf' ) FOR ATTACH

You can check the contents from the bin directory of the SQL Server folder. But hmm, I can't query the system tables in the copy — how am I supposed to do that? I'll have to think about it for a bit. haha

2) Changing the default database creation folder
I agonized over this quite a bit too: how do you change it~
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'C:\Data'
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'C:\Data'
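
To verify the change, the values can be read back with xp_instance_regread (like xp_instance_regwrite, an undocumented extended stored procedure); note that, as far as I know, the SQL Server service has to be restarted before the new defaults take effect:

--Read the default data and log folders back from the registry.
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData'
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog'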