February 13, 2013
SQL Server Agent: Agent XPs Disabled
There are some interesting observations about this error. If you see it, check the following:
1. Check whether Agent XPs is disabled.
Run this query; a value of 1 means the component is enabled:

```sql
select name, value from sys.configurations where name = 'Agent XPs'
```
2. If the value is 0, execute the following statements to enable the configuration:

```sql
sp_configure 'show advanced options', 1
reconfigure with override
sp_configure 'Agent XPs', 1
reconfigure with override
sp_configure 'show advanced options', 0
reconfigure with override
```
3. Restart SQL Agent.
Even after following the steps above, the Agent still showed as disabled and we were unable to browse it.
Here are some interesting observations:
- Checking the error log file, we found that a process, "SQLAgent - Email Loader", changed the value back to 0 after the SQL Agent restart.
- We verified whether any stored procedures or jobs ran during the Agent restart; nothing was found.
- We checked the SQL Server service startup parameters and found that trace flag T3608 had been added.
This trace flag bypasses recovery for all databases, including the system databases, except master. Because of this, the changed configuration value is not reflected when the Agent starts up, and it reverts to its default of 0.
- We removed the trace flag from the startup parameters and restarted the Agent. Once the restart completed, the disabled message disappeared and we were able to browse the Agent.
When troubleshooting any SQL Server issue, remember to check the startup parameters.
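As a quick check, all globally enabled trace flags on an instance can be listed before digging through the startup parameters; a minimal sketch:

```sql
-- Lists the status of all trace flags that are enabled globally on the instance.
DBCC TRACESTATUS(-1);
```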
September 19, 2012
We have several databases, and we periodically need to refresh only certain schemas rather than the entire database. The objects in these schemas change infrequently, so a schema refresh deals only with data. Some of the tables in the schema are extremely large; deleting their rows would take a very long time and generate a huge log file. The best solution is to truncate all tables of the respective schema and insert data from the source tables. We decided to follow the steps described below for the schema refresh.
- Script out all existing foreign key constraints and indexed views on the tables of the particular schema into a temp table.
- This temp table will contain the scripts to drop the constraints, recreate them, and truncate the tables.
- A TRUNCATE TABLE operation fails if the table is referenced by a foreign key constraint or an indexed view, so these constraints or indexed views must be dropped first if any are present.
- Drop any merge or transactional replication settings on the tables, since replication will also cause the table TRUNCATE to fail. Fortunately, we do not have any replication configured on our tables.
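Before truncating, it can help to confirm whether any indexed views exist, since a view only blocks TRUNCATE once it is materialized with a clustered index. A minimal sketch of such a check:

```sql
-- Lists views that have a clustered index (i.e., indexed views).
select schema_name(v.schema_id) as view_schema, v.name as view_name, i.name as index_name
from sys.views v
join sys.indexes i on i.object_id = v.object_id
where i.index_id = 1;
```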
There is a great post by Greg Robidoux on how to script out the enable, disable, drop, and recreate statements for foreign keys based on the primary key table. We used that script and modified a portion of it for our requirement; the modified version is included below.

```sql
DECLARE @tableName sysname
DECLARE @schemaName sysname
DECLARE @ID INT
SET @schemaName = 'XXX'
DECLARE @cmd NVARCHAR(1000)
DECLARE @sql_DROP nvarchar(150)
DECLARE @sql_Trunc nvarchar(150)
DECLARE @sql_Create nvarchar(500)

IF OBJECT_ID('tempdb..#FkTable') IS NOT NULL
    DROP TABLE #FkTable

CREATE TABLE #FkTable
(
    ID INT IDENTITY(1,1),
    sql_DROP nvarchar(150),
    sql_Trunc nvarchar(150),
    sql_Create nvarchar(500)
)

DECLARE @FK_NAME sysname, @FK_OBJECTID INT, @FK_DISABLED INT,
        @FK_NOT_FOR_REPLICATION INT, @DELETE_RULE smallint, @UPDATE_RULE smallint,
        @FKTABLE_NAME sysname, @FKTABLE_OWNER sysname,
        @PKTABLE_NAME sysname, @PKTABLE_OWNER sysname,
        @FKCOLUMN_NAME sysname, @PKCOLUMN_NAME sysname, @CONSTRAINT_COLID INT

DECLARE cursor_fkeys CURSOR FOR
    SELECT Fk.name,
           Fk.OBJECT_ID,
           Fk.is_disabled,
           Fk.is_not_for_replication,
           Fk.delete_referential_action,
           Fk.update_referential_action,
           OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name,
           schema_name(Fk.schema_id) AS Fk_table_schema,
           TbR.name AS Pk_table_name,
           schema_name(TbR.schema_id) Pk_table_schema
    FROM sys.foreign_keys Fk
    LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id
    WHERE schema_name(TbR.schema_id) = @schemaName

OPEN cursor_fkeys
FETCH NEXT FROM cursor_fkeys INTO @FK_NAME, @FK_OBJECTID, @FK_DISABLED,
    @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE,
    @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER

WHILE @@FETCH_STATUS = 0
BEGIN
    -- create statements for dropping the FK and truncating the table
    SET @sql_DROP = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
                  + '] DROP CONSTRAINT [' + @FK_NAME + ']'
    SET @sql_Trunc = 'TRUNCATE TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + ']'
    -- PRINT @sql_DROP

    DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT

    -- cursor to get the FK columns
    DECLARE cursor_fkeyCols CURSOR FOR
        SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name,
               COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name
        FROM sys.foreign_keys Fk
        LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id
        INNER JOIN sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID
        WHERE TbR.name = @PKTABLE_NAME
          AND schema_name(TbR.schema_id) = @schemaName
          AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008
        ORDER BY Fk_Cl.constraint_column_id

    OPEN cursor_fkeyCols
    FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME, @PKCOLUMN_NAME

    SET @COUNTER = 1
    SET @FKCOLUMNS = ''
    SET @PKCOLUMNS = ''
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @COUNTER > 1
        BEGIN
            SET @FKCOLUMNS = @FKCOLUMNS + ','
            SET @PKCOLUMNS = @PKCOLUMNS + ','
        END
        SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']'
        SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']'
        SET @COUNTER = @COUNTER + 1
        FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME, @PKCOLUMN_NAME
    END
    CLOSE cursor_fkeyCols
    DEALLOCATE cursor_fkeyCols

    -- generate the CREATE FK statement
    SET @sql_Create = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + '] WITH '
        + CASE @FK_DISABLED WHEN 0 THEN ' CHECK ' WHEN 1 THEN ' NOCHECK ' END
        + ' ADD CONSTRAINT [' + @FK_NAME + '] FOREIGN KEY (' + @FKCOLUMNS
        + ') REFERENCES [' + @PKTABLE_OWNER + '].[' + @PKTABLE_NAME + '] (' + @PKCOLUMNS + ')'
        + ' ON UPDATE ' + CASE @UPDATE_RULE WHEN 0 THEN ' NO ACTION ' WHEN 1 THEN ' CASCADE ' WHEN 2 THEN ' SET_NULL ' END
        + ' ON DELETE ' + CASE @DELETE_RULE WHEN 0 THEN ' NO ACTION ' WHEN 1 THEN ' CASCADE ' WHEN 2 THEN ' SET_NULL ' END
        + CASE @FK_NOT_FOR_REPLICATION WHEN 0 THEN '' WHEN 1 THEN ' NOT FOR REPLICATION ' END
    -- PRINT @sql_Create

    INSERT INTO #FkTable (sql_DROP, sql_Trunc, sql_Create)
    VALUES (@sql_DROP, @sql_Trunc, @sql_Create)

    FETCH NEXT FROM cursor_fkeys INTO @FK_NAME, @FK_OBJECTID, @FK_DISABLED,
        @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE,
        @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER
END
CLOSE cursor_fkeys
DEALLOCATE cursor_fkeys

-- execute the DROP and TRUNCATE statements
DECLARE cursor_fkeysCmd CURSOR FOR
    SELECT sql_Trunc, sql_DROP FROM #FkTable
OPEN cursor_fkeysCmd
FETCH NEXT FROM cursor_fkeysCmd INTO @sql_Trunc, @sql_DROP
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql_DROP
    EXEC sp_executesql @sql_Trunc
    FETCH NEXT FROM cursor_fkeysCmd INTO @sql_Trunc, @sql_DROP
END
CLOSE cursor_fkeysCmd
DEALLOCATE cursor_fkeysCmd
```
- Find the tables that have an IDENTITY column and SET IDENTITY_INSERT ON for those tables to allow explicit values to be inserted into the IDENTITY columns.
- Using the Database Export/Import wizard, import data into every table in the specific schema.
- Recreate the foreign key constraints from the scripts captured in step 2.
- SET IDENTITY_INSERT OFF for all IDENTITY tables in the specific schema.
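The IDENTITY_INSERT handling in the steps above can be sketched as follows; the table and column names are illustrative, and note that IDENTITY_INSERT can be ON for only one table per session at a time:

```sql
-- Hypothetical table in the refreshed schema.
SET IDENTITY_INSERT XXX.SomeTable ON;

INSERT INTO XXX.SomeTable (Id, Name)   -- explicit value supplied for the IDENTITY column Id
VALUES (1, 'example');

SET IDENTITY_INSERT XXX.SomeTable OFF;
```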
August 6, 2012
We have an audit application whose database uses Service Broker to receive audit messages. The system handles a large number of audit messages per second from each device; as more devices were registered to the application over time, the message transfer rate to the broker grew along with them.
When an application deals with such a high volume of messages, the Service Broker queues can grow very large. Reads and deletes against the queues also occur in large numbers, so a high level of fragmentation can build up in the queue tables, which slows down read and delete operations on the queues.
We also noticed that message processing would sometimes hit a threshold from which it could not recover: the queue processing rate dropped below the incoming rate, and as a result the queue started growing.
Queues are backed by hidden internal tables, and neither an ALTER QUEUE ... REBUILD nor an ALTER QUEUE ... REORGANIZE statement can be executed against those tables. However, if we log in to SQL Server over the Dedicated Administrator Connection (DAC), we are able to view them. Once connected via the DAC, the following query returns the name of the internal table that backs each Service Broker queue in the database.
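A query along these lines returns that mapping from the catalog views (a sketch):

```sql
-- Maps each Service Broker queue to the internal table that backs it.
select q.name as queue_name, it.name as internal_table_name
from sys.service_queues q
join sys.internal_tables it on it.parent_object_id = q.object_id;
```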
Great!!! With a DAC connection we can reindex the Service Broker queue internal tables.
Here is the script we use in our environment to rebuild the queue internal tables. We rebuild only the queues that have more than 200 rows, but this number can be changed depending on the environment.
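A minimal sketch of that approach, run over the DAC, might look like the following; the 200-row threshold matches the one described above:

```sql
-- Run from a DAC connection. Builds and executes ALTER INDEX ... REBUILD statements
-- for the internal tables backing queues with more than 200 rows.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ALL ON '
    + QUOTENAME(SCHEMA_NAME(it.schema_id)) + N'.' + QUOTENAME(it.name)
    + N' REBUILD;' + CHAR(13)
FROM sys.service_queues q
JOIN sys.internal_tables it ON it.parent_object_id = q.object_id
JOIN sys.partitions p ON p.object_id = it.object_id AND p.index_id IN (0, 1)
WHERE p.rows > 200;

EXEC sp_executesql @sql;
```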
August 3, 2012
Recently we restored a database to a test environment provided by a customer for a schema upgrade. The database was large, and the schema change would generate a large log, so we placed the log file on a large drive. Even so, the log file grew until it ran out of disk space, and the upgrade process failed. The database uses the SIMPLE recovery model, and the schema upgrade was planned to run in batches, so after each batch the transaction log should be truncated and the log file should return to its initial size. Instead, the log file kept growing after each batch of schema modifications. Because the database was large and this was a test environment, no backup had been taken of it.
The script changing the schema reported these messages when the process failed:
```
/* -- Command Failed -- */
The transaction log for database 'XXXXX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.
```
The log_reuse_wait_desc column in the sys.databases catalog view showed NOTHING.
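The wait description and recovery model can be checked together; a sketch of that query, with a placeholder database name:

```sql
select name, recovery_model_desc, log_reuse_wait_desc
from sys.databases
where name = 'XXXXX';
```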
We wanted to verify whether the database recovery model had changed after the restore, so we focused our investigation on the error logs and trace files.
There is a very good tutorial by Simon on how to find out when a database recovery model was changed. We modified the script from his blog for our needs to find more details about the recovery model modification.
```sql
set nocount on

declare @searchString1 varchar(255)
declare @searchString2 varchar(255)
set @searchString1 = 'RECOVERY'
set @searchString2 = 'OPTION'

declare @logs table (LogNo int, StartDate datetime, FileSize int)
declare @results table (LogFileNo int, LogDate datetime, ProcessInfo varchar(20), Text varchar(max))

insert into @logs EXEC master..sp_enumerrorlogs

declare cLogs cursor for select LogNo from @logs
declare @LogNo int
open cLogs                       -- OPEN and BEGIN..END were missing from the original post
fetch cLogs into @LogNo
while @@fetch_status = 0
begin
    insert into @results (LogDate, ProcessInfo, Text)
        EXEC sp_readerrorlog @LogNo, 1, @searchString1, @searchString2
    update @results set LogFileNo = @LogNo where LogFileNo is null
    fetch cLogs into @LogNo
end
close cLogs
deallocate cLogs

select * from @results

-- Correlate the error log entries with the default trace to see who changed the option.
-- The select list below is reconstructed; the trace file path is environment specific.
select t.StartTime, t.LoginName, t.HostName, t.ApplicationName, r.Text
from fn_trace_gettable('xxxxxxxxxxxxxxxxxxxxxxxxxx', null) t
join @results r on t.StartTime between dateadd(ms, -150, r.LogDate) and dateadd(ms, 150, r.LogDate)
               and t.spid = substring(r.ProcessInfo, 5, 10) -- required to enable a hash join to be used
where t.EventClass = 164         -- Object:Altered
  and t.EventSubClass = 1
set statistics time off
-- Note: use of a temp table results in a nested loop join but also a spool.
```
Here is the result we found from the above query.
The query result told us that the database was in the FULL recovery model when restored, and that we changed the recovery model to SIMPLE at the time shown in the StartTime column. However, no full backup was ever taken after the recovery model change. Because of this, the database kept generating log records in the transaction log file as if it were still in the FULL recovery model.
We took a full backup of the database and restarted the schema modification scripts, and this time the schema changes completed successfully.
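The fix itself is a plain full backup; a sketch with placeholder names:

```sql
-- After this first full backup, log truncation behaves as expected again.
BACKUP DATABASE [XXXXX] TO DISK = N'F:\XXXXX_full.bak' WITH STATS = 10;
```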
May 3, 2012
Our users were able to access the Reporting Services URL when they connected to the server via RDP, but they were unable to access it remotely through the browser. Reporting Services was running under a domain account with sufficient permissions. When we changed the service account to "Network Service", users were able to access the URL both remotely and via RDP on the server. Unfortunately, the company's security policy prohibits running services under any account other than a domain account, so we were unable to use the "Network Service" account.
When users tried to connect to the Reporting Services URLs (Web Service and Report Manager) remotely via the browser, they were prompted three times to enter credentials. After the third prompt, the user was presented with a blank screen.
1. SQL Server 2008 R2 Reporting Services is installed on a Windows Server 2008 machine.
2. Windows Firewall is turned off for the specific Reporting Services port.
3. The service domain account has sysadmin privileges on the SQL Server instance.
4. The service domain account is a member of the local Administrators group.
5. The user connecting to Reporting Services has System Administrator and Content Manager privileges.
After further research and investigation we learned we had three possible solutions for this error.
1. Delete the specific Authorization Types in rsreportserver.xml;
2. Change the Reporting Server Service Account to “Network Services”; or
3. Configure a specific host header for SSRS, configure DNS server to understand that host header, and then configure the domain controller to have an SPN for the host header and the Reporting Server Service Account.
In our case, solution 2 was not an option given the security requirements. Solution 3 required additional network team resources in addition to security approval so we decided to try Solution 1, delete the Authorization Types in rsreportserver.xml.
To delete the specific authentication types inside the rsreportserver.xml file, open …\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.xml. Search the file for the "AuthenticationTypes" section. If the section contains RSWindowsNegotiate or RSWindowsBasic, either delete them or comment them out. Here is what the section should look like after you comment them out.
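A sketch of the relevant section after the change; depending on the installation, other elements may also appear under this node:

```xml
<Authentication>
  <AuthenticationTypes>
    <RSWindowsNTLM/>
    <!-- <RSWindowsNegotiate/> -->
    <!-- <RSWindowsBasic/> -->
  </AuthenticationTypes>
</Authentication>
```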
Once the file has been changed, restart the services.
After following those instructions, we were able to access the Reporting Services URL (Web Service and Report Manager) remotely via the browser.
Hopefully this helps some of you troubleshoot Reporting Services.
July 16, 2011
It is important to know each database file's size and the free space available within it. A server monitoring tool may send alert emails when free space on a drive crosses a threshold. If the database files are set to auto-grow (most production servers use this option), then whenever a data file needs to grow it does so based on the Autogrowth/Maxsize settings. But sometimes there is not enough free space on the data drive, and the operating system cannot satisfy the SQL Server data file growth request. The following script gives a detailed picture of each database file's autogrowth setting as well as how much free space is available inside the file.
```sql
set nocount on

if exists (select * from tempdb.dbo.sysobjects o
           where o.xtype in ('U') and o.id = object_id(N'tempdb..#fileInfo'))
    drop table #fileInfo

create table #fileInfo
(
    [ID] int not null identity (1, 1) primary key,
    [databaseID] smallint not null,
    [fileID] smallint not null,
    [fileSize] int not null,
    [fileUsed] int not null,
    [fileUnused] int not null
)

declare @sql nvarchar(4000)
set @sql =
'use [' + '?' + '] ;
if db_name() <> N''?'' goto Error_Exit
insert into #fileInfo
-- All sizes are calculated in MB
/* ...the body of the per-database query (and the Error_Exit label) was lost
   from the original post... */
on ( aa.groupid = bb.groupid )'

exec sp_msforeachdb @sql

-- The final reporting query; the first columns of the select list were lost from the
-- original post, so only the surviving expressions are shown. GROUP BY / ORDER BY
-- were completed to make the statement valid.
select
     case when saf.status & 0x100000 = 0
          then convert(varchar, ceiling((saf.growth * 8192)/(1024.0*1024.0))) + ' MB'
          else convert(varchar, saf.growth) + ' Percent'
     end as Growth
    ,convert(decimal(18,2), (sum(size)*8)/1024.0) as db_size_in_mb
    ,convert(decimal(18,2), db_info.fileUsed) as file_used_mb
    ,convert(decimal(18,2), db_info.fileUnused) as file_unused_mb
    ,(select convert(decimal(18,2), (sum(size)*8)/1024.0)   -- subquery select reconstructed
      from sys.sysaltfiles where dbid = saf.dbid and groupid = 0) as log_size_in_mb
from sys.sysaltfiles saf
join sys.databases d on saf.dbid = d.database_id
join #fileInfo db_info
    on saf.dbid = db_info.databaseID and saf.fileid = db_info.fileID
group by saf.dbid, d.name, d.compatibility_level, saf.growth, saf.status,
         db_info.fileUsed, db_info.fileUnused
order by d.name
```
The output of this query lists, for each database file, its size, used and unused space, and growth setting.
June 15, 2011
SharePoint 2010 auditing tracks all activities that occur within the environment, and the Audit Log Trimming job purges old audit entries. The site collection administrator can set auditing requirements that determine which types of actions are tracked.
On one of our SharePoint servers, auditing was configured but audit purging was not scheduled. The Audit Log Trimming job had not run for more than a year, so the AuditData table had grown to 65 GB in size and more than 2 billion records.
One of the SharePoint admins scheduled this job to run every day during the low-activity window on the database server. The first time the job kicked off, it ran for 12 hours without completing while deleting data from the AuditData table. As a result, a large number of locks were created on the database server, and users were unable to access some functionality and sites in SharePoint.
Because our SLA mandates minimum downtime during business hours, we had to kill the session, and all of its transactions were rolled back, which also took a considerable time. We then decided to purge the data manually instead of running the job, keeping up to one month of data in the audit table. Once we had manually deleted the data, we rescheduled the job, and that time it ran successfully in a few minutes. Here is the procedure we followed to purge the AuditData table.
```sql
-- Take a full backup of the database.
BACKUP DATABASE [xxx] TO DISK = N'F:\xxxxxxx.BAK'
WITH NOFORMAT, NOINIT, NAME = N'xxxx-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, STATS = 10
```
Find out which SharePoint site had the most rows in the AuditData table by using the following query.

```sql
-- The CTE body's select list and source table were reconstructed from the CTE column
-- list; UserInfo is assumed to be the table these columns come from.
WITH SiteLogin (tp_SiteID, tp_ID, tp_SiteAdmin, tp_Login, RowNumber) AS
(
    select tp_SiteID, tp_ID, tp_SiteAdmin, tp_Login
          ,ROW_NUMBER() over (partition by tp_SiteID order by tp_SiteID) RowNumber
    from UserInfo
    where tp_SiteAdmin = 1
)
select FullUrl, a.SiteId, COUNT(a.SiteId) as total, MAX(a.Occurred) as MaxDate,
       MIN(a.Occurred) as MinDate, tp_ID
from AuditData a
inner join Sites s on a.SiteId = s.Id
inner join SiteLogin l on a.SiteId = l.tp_SiteID
where l.RowNumber = 1
group by FullUrl, SiteId, tp_ID
order by FullUrl
```
We saved this query result in an Excel file and later used it to supply the parameter values for the stored procedure used in the following steps.
The table has more than 2 billion records, the database was in the FULL recovery model, and we did not have enough free disk space on the log drive, so we had to delete the rows in batches, selecting on the Occurred date in the AuditData table. In our case, 95% of the rows in the table belonged to one site, so we deleted that site's rows in batches and each of the other sites' rows in a single statement.
Disable all nonclustered indexes on the AuditData table; this improves the delete operation time.

```sql
alter index AuditData_OnItem on dbo.AuditData disable
```
In this step we created a temp table that stored the key values (the SiteId and Occurred columns) from the AuditData table. We filtered the keys on the Occurred column, so the temp table held only the keys for that particular batch. The delete query then joined the temp table to AuditData on the key values to delete the rows. Joining a temp table to the base table during a large delete creates fewer key-level locks in the database.
```sql
declare @SiteId uniqueidentifier
declare @EndDate datetime
declare @count int
declare @UserId int

set @SiteId =            -- from the Step 2 Excel file
set @EndDate = 'xxxxxxxxx'
set @UserId =            -- from the Step 2 Excel file
```
The @EndDate values should fall between the MaxDate and MinDate values in the Excel file for the corresponding SiteId. We chose one month as our batch interval; that may differ in your environment depending on how the data is spread across the Occurred column for that particular site.
```sql
CREATE TABLE #AuditDataDelete
(
    [SiteId] [uniqueidentifier] NOT NULL,
    [Occurred] [datetime] NOT NULL
) ON [PRIMARY]

ALTER TABLE #AuditDataDelete ADD PRIMARY KEY CLUSTERED ([SiteId], [Occurred])

Insert Into #AuditDataDelete ([SiteId], [Occurred])
Select [SiteId], [Occurred]            -- select list reconstructed
From AuditData with(nolock)
Where [SiteId] = @SiteId and [Occurred] < @EndDate

Delete AD                              -- DELETE clause reconstructed; missing in the original post
From AuditData AD
Inner Join #AuditDataDelete TD
    On TD.[SiteId] = AD.[SiteId] and TD.[Occurred] = AD.[Occurred]

SET @count = @@ROWCOUNT
```
In this step we recorded the previous step's delete statistics in the AuditData table by running the proc_AddSiteAuditEntryFromSql stored procedure. The procedure takes a few parameters, and here are the values we applied:
UserID: from the Step 2 Excel file.
SiteId: from the Step 2 Excel file.
EventID: 50 (default).
AuditEventData: built as shown in the next step.
@count: from the previous step.
@EndDate: from the previous step.
```sql
DECLARE @AuditEventData nvarchar(4000)
DECLARE @Ret int
SET @Ret = 0
-- NOTE: the markup characters inside the string literals appear to have been stripped
-- when the original post was published; the expression is reproduced as posted.
SET @AuditEventData = STUFF(STUFF('', 3, 0, 'DeleteEntriesInfo'), 1, 0, STUFF(STUFF(STUFF('', 3, 0, 'Rows'), 1, 0, STUFF(@count, 1, 0, STUFF('', 2, 0, 'Rows'))) + STUFF(STUFF('', 3, 0, 'EndDate'), 1, 0, STUFF(@EndDate, 1, 0, STUFF('', 2, 0, 'EndDate'))), 1, 0, STUFF('', 2, 0, 'DeleteEntriesInfo')))
EXEC @Ret = proc_AddSiteAuditEntryFromSql
```
The database was in the FULL recovery model, so after each batch we had to back up the transaction log; otherwise the log file would grow very large and we might run out of disk space. A timestamp was added to each log backup file name so previous log files would not be overwritten; that way, if a restore were required, we could restore to a point in time. The log file was also shrunk in this step to free up VLFs.
```sql
DECLARE @sql nvarchar(4000)
DECLARE @filename nvarchar(255)
DECLARE @folder nvarchar(255)
DECLARE @full_path_and_filename nvarchar(1000)
DECLARE @err_msg nvarchar(2000)
DECLARE @crlf varchar(2)

SET @crlf = CHAR(13) + CHAR(10)
SET @folder = 'X:\XXXXX\'
SET @filename = 'XXXXX' + '_backup_'
    + REPLACE(REPLACE(REPLACE(CONVERT(char(16), CURRENT_TIMESTAMP, 120), '-', ''), ' ', ''), ':', '')
SET @full_path_and_filename = @folder + @filename + '.TRN'

SET @sql = 'BACKUP ' + 'LOG' + ' ' + QUOTENAME('VIP_Sites') + @crlf
SET @sql = @sql + 'TO DISK = ' + QUOTENAME(@full_path_and_filename, '''') + @crlf
SET @sql = @sql + 'WITH' + @crlf
SET @sql = @sql + ' NOFORMAT,' + @crlf
SET @sql = @sql + ' NOINIT,' + @crlf
SET @sql = @sql + ' NAME = ' + QUOTENAME(@filename, '''') + ',' + @crlf
SET @sql = @sql + ' SKIP, NOREWIND, NOUNLOAD, STATS = 10 ' + ',' + @crlf
SET @sql = LEFT(@sql, LEN(@sql) - 3)   -- trim the trailing comma and CRLF

EXEC sp_executesql @sql                -- execution step reconstructed; missing in the original post

DBCC SHRINKFILE (XXXXXXXX, 1)
```
We repeated step 4 in multiple batches depending on each site's data in the AuditData table. The goal was to keep only one month of audit data for every site. In our case, if a site had more than 9 million records in AuditData, we used multiple delete batches for it; otherwise we deleted the site's data in a single batch. Besides the single-versus-multiple-batch decision, the delete strategy depends on how much free space is available on your log drive (since the transaction log file can grow), as well as on your transaction log backup interval and your application's allowed downtime.
The table may be internally fragmented after such a large delete operation, so we ran DBCC CLEANTABLE to recover the empty space; this command reclaims space from a table if that table has any variable-length columns. We then rebuilt the clustered index and all nonclustered indexes.

```sql
-- Recover empty space left behind by dropped variable-length column data.
DBCC CLEANTABLE ('xxxx', 'AuditData')

-- Rebuild the clustered index.
ALTER INDEX AuditData_OnSiteOccurred ON dbo.AuditData REBUILD;

-- Rebuild all nonclustered indexes.
ALTER INDEX AuditData_OnItem ON dbo.AuditData REBUILD;
```
June 6, 2011
We had a problem with remote DTC (Distributed Transaction Coordinator) connectivity on one of our test SQL Servers. We experienced "RPC Endpoint Mapper" and "firewall has closed the ports" error messages during a BizTalk Server installation. By architectural design, the BizTalk Server and SQL Server are on different machines, and during installation BizTalk Server could not connect to SQL Server through the MSDTC service. After further investigation, we followed the steps below to fix this remote connection issue.
- As the very first step, we confirmed that the MSDTC service was configured correctly on both servers. There is an excellent tutorial posted by Mr. Denny on how to configure the MSDTC service on Windows Server 2008. Please follow that tutorial to set up the MSDTC service and its security properties.
- Once MSDTC was installed and configured on Windows Server 2008 R2, the local DTC security configuration should look like the screenshot below (in our environment). To view the local DTC security configuration, go to Start > Administrative Tools > Component Services. In the Component Services window, navigate to Component Services > Computers > My Computer > Distributed Transaction Coordinator, right-click Local DTC, and select Properties; the Local DTC Properties window will open.
We also confirmed that the boxes marked in red under Security Settings and Transaction Manager Communication were checked.
- As part of the project design, the BizTalk and database servers were on different machines, so we applied the local DTC security configuration on both servers.
- The firewall configuration to control RPC dynamic port allocation is described in KB250367. We applied this configuration on both servers. Note that the firewall must be open in both directions for the specified ports. Here are the steps:
a) To start Registry Editor, click Start, click Run, type regedt32, and then click OK.
b) You must use Regedt32.exe rather than Regedit.exe, because Regedit.exe does not support the REG_MULTI_SZ data type that is required for the Ports value.
c) Expand the tree by double-clicking the folders named in the following path:
d) Click the RPC folder, and then click Add Key on the Edit menu.
e) In the Add Key dialog box, in the Key Name box, type Internet, and then click OK.
f) Add a value under Internet by using the following values:
Data Type: REG_MULTI_SZ
g) Add another value under Internet by using the following values:
Data Type: REG_SZ
h) Add another value under Internet by using the following values:
Data Type: REG_SZ
i) Restart the computer. When RPC restarts, it will assign incoming ports dynamically, based on the registry values we specified. Once these steps are completed successfully, the new registry configuration should look like this.
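Per KB250367, the values created under the Internet key are Ports (REG_MULTI_SZ), PortsInternetAvailable (REG_SZ = Y), and UseInternetPorts (REG_SZ = Y); with the port range used here, the result looks roughly like this:

```
HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet
    Ports                   REG_MULTI_SZ    5000-5100
    PortsInternetAvailable  REG_SZ          Y
    UseInternetPorts        REG_SZ          Y
```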
- The next step was to add these ports, and the msdtc.exe program, as exceptions in the firewall. Here is what we configured:
a) Msdtc.exe program exception.
b) TCP/IP ports:
135 - RPC EPM (endpoint mapper)
5000-5100 - MSDTC (dynamically assigned a port by the EPM)
Use the DTCTester Tool
We downloaded and installed DTCTester on both servers. The DTCTester tool tests a distributed transaction against a specified Microsoft SQL Server, which helps when testing distributed transactions across firewalls or between networks. DTCTester performs a distributed transaction through the ODBC API against a SQL Server database: it creates a temp table in SQL Server, inserts a record into it, and then commits the change.
The following are the steps we followed to test DTC with the DTCTester tool.
- We created an ODBC data source for our SQL Server through the ODBC utility on the server where BizTalk Server was installed.
- In a command prompt, navigate to the folder where dtctester.exe was extracted.
- Execute the following from the command line:
dtctester <dsn name> <user name> <password>
Replace the values in brackets as appropriate for your environment.
- We configured the ODBC data source and executed the above command on both servers. If the test is successful, you will see success messages in the command window.
- We saw those messages in the DTCTester results on both servers, which proved that the MSDTC service was working both remotely and locally.
Hopefully this post helps you debug MSDTC connection errors.
May 28, 2011
Recently I had a project where I installed SQL Server 2008 R2. The installation failed because the drive where SQL Server was being installed was compressed. Here is the error message.
On further investigation I found, in the drive properties, that "Compress this drive to save disk space" was checked.
Uncheck this option and another window will open. In this new window, select the "Apply changes to drive, subfolders and files" radio button and click OK. It may take some time to complete the process, depending on the drive's capacity.
Then restart the installation. This time it will complete successfully.
May 6, 2011
Recently I worked on a shared SQL Server environment where many SharePoint 2010 databases were present in a single SQL Server instance. These SharePoint databases had been upgraded from the 2007 version to 2010, rendering some of the SharePoint 2007 databases obsolete. Those databases can be deleted from the server to free up space on the drives.
The SQL Server has five logical drives, each connected to an individual LUN in the SAN, but users had been complaining about the application's latency for a long period of time.
SharePoint creates several databases in SQL Server, and due to space limitations across the logical drives, the data files had been spread somewhat randomly over different drives based on file size.
We ran Perfmon and collected data for the server. Through in-depth analysis of the Perfmon data, we determined that the databases on the server had a performance problem with IO requests.
Because the files were spread over multiple drives, it was possible for several busy data files to sit on the same drive and share the same IO resources on the LUN.
The following query was executed to find the IO access statistics of each database file across the entire SQL Server instance:
```sql
declare @Mbytes_read as dec(18, 2)
declare @Mbytes_written as dec(18, 2)
declare @disk_Mbytes as dec(18, 2)

-- Server-wide totals (log files, file_id 2, are excluded).
select @Mbytes_read = SUM(num_of_bytes_read),
       @Mbytes_written = SUM(num_of_bytes_written),
       @disk_Mbytes = SUM(size_on_disk_bytes)
from sys.dm_io_virtual_file_stats (NULL, NULL)
where file_id <> 2

select DB_NAME(s.database_id) as [Database],
    CONVERT(VARCHAR(12), CAST((num_of_bytes_read/(1024*1024)) as dec(10,2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), num_of_bytes_read*100.0/@Mbytes_read)) + ' %)' AS 'Mbytes_read (%)',
    ReadLatency = CASE WHEN num_of_reads = 0 THEN 0 ELSE (io_stall_read_ms / num_of_reads) END,
    CONVERT(VARCHAR(12), CAST((num_of_bytes_written/(1024*1024)) as dec(10,2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), num_of_bytes_written*100.0/@Mbytes_written)) + ' %)' AS 'Mbytes_written (%)',
    WriteLatency = CASE WHEN num_of_writes = 0 THEN 0 ELSE (io_stall_write_ms / num_of_writes) END,
    CONVERT(VARCHAR(12), CAST((size_on_disk_bytes/(1024*1024)) as dec(10,2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), size_on_disk_bytes*100.0/@disk_Mbytes)) + ' %)' AS 'disk_Mbytes (%)',
    CONVERT(VARCHAR(12), CAST((io_stall_read_ms/(1000*60)) as dec(10,2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), io_stall_read_ms*100.0/io_stall)) + ' %)' AS 'io_stall_read_minute (%)',
    CONVERT(VARCHAR(12), CAST((io_stall_write_ms/(1000*60)) as dec(10,2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), io_stall_write_ms*100.0/io_stall)) + ' %)' AS 'io_stall_write_minute (%)',
    CAST((io_stall/(1000*60)) as dec(10,2)) as io_stall_minute,
    f.name as [Logical Name],
    substring(f.filename, 0, 2) as [Drive Letter],
    f.filename as [File Full Path]
from sys.sysaltfiles f
inner join sys.dm_io_virtual_file_stats (NULL, NULL) s
    on f.dbid = s.database_id
   and f.fileid = s.file_id
where s.file_id <> 2   -- the comparison operator was lost in the original post
order by DB_NAME(s.database_id)
```
The screenshot below shows a portion of the query output.
I calculated the percentage of reads and writes for each data file relative to the entire SQL Server instance by dividing each file's read and write values by the server-wide totals. The query output also includes read and write latency as well as the total disk space used by each data file.
Based on the data above, we took the following steps to improve the server's IO behavior:
1. Delete the obsolete databases to free up drive space (making sure to take and save the latest full backup of each, in case we ever need these databases again).
2. Keep the biggest data files on their existing drives.
3. Move data files to different drives based on their read/write percentages, so that no single drive is flooded with the most read- or write-heavy files.
4. Since tempdb had the highest read and write percentages, move its data files to a completely separate, high-performance LUN.
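Step 4 can be sketched with ALTER DATABASE; the logical file name and target path below are illustrative, and the move takes effect only after the SQL Server service is restarted:

```sql
-- Repoint a tempdb data file to a dedicated drive (hypothetical path).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
```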