August 6, 2012
We have an audit application whose database uses Service Broker to receive the audit messages. The system handles a large number of audit messages per second from each device, and as more devices were registered to the application over time, the message transfer rate to the broker also grew exponentially.
When an application pushes this volume of messages, the Service Broker queues can grow very large. Messages are also read and deleted from the queues at a high rate, so the internal tables backing these queues can become heavily fragmented, which in turn slows down read and delete operations against the queue.
We also noticed that message-receive processing would sometimes hit a threshold from which it could not recover: the queue processing rate dropped below the incoming rate, and as a result the queue started growing.
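To see whether fragmentation is actually the problem, you can inspect the internal tables behind the queues. A sketch of such a check, assuming sys.dm_db_index_physical_stats accepts the internal table's object_id (the QUEUE_MESSAGES rows in sys.internal_tables point back to the owning queue through parent_id):

```sql
-- Sketch: fragmentation of the internal tables backing Service Broker queues.
SELECT q.name AS queue_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.service_queues AS q
JOIN sys.internal_tables AS it
  ON it.parent_id = q.object_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), it.object_id, NULL, NULL, 'LIMITED') AS ips
WHERE it.internal_type_desc = 'QUEUE_MESSAGES';
```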
Queues are backed by hidden internal tables, and neither an ALTER QUEUE … REBUILD nor an ALTER QUEUE … REORGANIZE statement can be executed against them. But if we connect to the SQL Server over the Dedicated Administrator Connection (DAC), we are able to view these hidden tables. Once connected via the DAC, the following query returns the name of the internal table that backs each Service Broker queue in the database.
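A sketch of such a query, using sys.internal_tables (its QUEUE_MESSAGES rows point back to the owning queue through parent_id):

```sql
-- Map each Service Broker queue to its backing internal table.
SELECT q.name  AS queue_name,
       it.name AS internal_table_name
FROM sys.service_queues AS q
JOIN sys.internal_tables AS it
  ON it.parent_id = q.object_id
WHERE it.internal_type_desc = 'QUEUE_MESSAGES';
```

The internal tables are named like queue_messages_NNNNNNNN and live in the sys schema.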
Great!!! With a DAC connection we can reindex the Service Broker queue internal tables.
Here is the script we use in our environment to rebuild the queues' internal tables. We rebuild only the queues that have more than 200 rows, but this threshold can be changed depending on the environment.
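A sketch of such a rebuild script follows. It must be run over the DAC, since ALTER INDEX against a sys.queue_messages_* internal table is only allowed on that connection; the 200-row threshold is the one mentioned above:

```sql
-- Sketch: rebuild the indexes on internal tables behind queues with > 200 rows.
-- Run this over the DAC (admin:servername) in the target database.
DECLARE @sch sysname, @tbl sysname, @sql nvarchar(max);

DECLARE cQueues CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name, it.name
    FROM sys.service_queues AS q
    JOIN sys.internal_tables AS it ON it.parent_id = q.object_id
    JOIN sys.schemas AS s          ON s.schema_id  = it.schema_id
    JOIN sys.dm_db_partition_stats AS ps
      ON ps.object_id = it.object_id AND ps.index_id IN (0, 1)
    WHERE it.internal_type_desc = 'QUEUE_MESSAGES'
      AND ps.row_count > 200;   -- threshold from the post; tune per environment

OPEN cQueues;
FETCH NEXT FROM cQueues INTO @sch, @tbl;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'ALTER INDEX ALL ON '
             + QUOTENAME(@sch) + N'.' + QUOTENAME(@tbl) + N' REBUILD;';
    EXEC (@sql);
    FETCH NEXT FROM cQueues INTO @sch, @tbl;
END
CLOSE cQueues;
DEALLOCATE cQueues;
```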
August 3, 2012
Recently we restored a database into a test environment provided by a customer for a schema upgrade. The database was large, and the schema change would generate a large amount of log, so we placed the log file on a large drive; even so, the log grew so much that it ran out of disk space and the upgrade process failed. The database uses the SIMPLE recovery model and the schema upgrade was planned to run in batches, so after each batch the transaction log should be truncated and the log file should go back to its initial size. Instead, the log file kept growing after each batch of schema modifications. The database is large, and this is a test environment, so no backup had been taken of it.
The schema-change scripts reported these messages when the process failed.
/* — Command Failed — */
The transaction log for database 'XXXXX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.
The log_reuse_wait_desc column in the sys.databases catalog view was showing NOTHING.
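For reference, a sketch of the check we ran ('XXXXX' stands in for the actual database name, as in the error message above):

```sql
-- Why can't log space be reused, and what recovery model is in effect?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'XXXXX';
```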
We wanted to verify whether the database recovery model had changed after the restore, so we focused our investigation on the error logs and trace files.
There is a very good tutorial by Simon on how to find out when a database recovery model has changed. We modified the script from his blog for our needs, to find more details about the recovery model modification.
set nocount on

declare @searchString1 varchar(255)
declare @searchString2 varchar(255)
set @searchString1 = 'RECOVERY'
set @searchString2 = 'OPTION'

declare @logs table (LogNo int, StartDate datetime, FileSize int)
declare @results table (LogFileNo int, LogDate datetime, ProcessInfo varchar(20), Text varchar(max))

insert into @logs exec master..sp_enumerrorlogs

declare cLogs cursor for select LogNo from @logs
declare @LogNo int
open cLogs
fetch cLogs into @LogNo
while @@fetch_status = 0
begin
    insert into @results (LogDate, ProcessInfo, Text)
    exec sp_readerrorlog @LogNo, 1, @searchString1, @searchString2
    update @results set LogFileNo = @LogNo where LogFileNo is null
    fetch cLogs into @LogNo
end
close cLogs
deallocate cLogs

select * from @results

select t.StartTime, t.SPID, t.LoginName, t.ApplicationName, r.Text
from fn_trace_gettable('xxxxxxxxxxxxxxxxxxxxxxxxxx', null) t
join @results r on t.StartTime between dateadd(ms, -150, r.LogDate) and dateadd(ms, 150, r.LogDate)
and t.spid = substring(r.ProcessInfo, 5, 10) -- required to enable a hash join to be used
where t.EventClass = 164 -- Object:Altered
and t.EventSubClass = 1
set statistics time off
-- Use of a temp table results in a nested loop join but also a spool
Here is the result we got from the above query.
The result gave us the facts: the database was in the FULL recovery model when it was restored, and the recovery model was changed to SIMPLE at the time shown in the StartTime column. But no full backup was ever taken of the database after the recovery model change. Because of this, the database kept generating log records in its transaction log file as if it were in the FULL recovery model.
We took a full backup of the database and restarted the schema modification scripts, and this time the schema changes completed successfully.
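A sketch of that step (the backup path is hypothetical, and 'XXXXX' again stands in for the actual database name):

```sql
-- Take the full backup that establishes the backup chain before re-running
-- the batched schema upgrade.
BACKUP DATABASE [XXXXX]
TO DISK = N'E:\Backup\XXXXX_full.bak'
WITH INIT, STATS = 10;
```

After this, each batch commit lets the SIMPLE-recovery log truncate on checkpoint as originally expected.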