
Install SQL Server on Ubuntu 16.04 Step by Step

Microsoft finally announced and released a stable version of SQL Server for Linux, called SQL Server vNext. I have been following the cumulative updates of the SQL Server rpm for Linux since they were first announced in March 2016, but those installations were not very pleasant, especially in a Red Hat environment. The installation process for this new release is very simple compared to previous releases. However, this new version of SQL Server has an Ubuntu edition and is only available in the Enterprise edition.

I installed Ubuntu 16.04 (desktop edition) on an Oracle VirtualBox virtual machine (free). Fortunately I was able to install SQL Server on Ubuntu over the weekend (while watching the Alabama football game!!). I captured a screenshot of every step I did during the installation and saved them to a document, which I am sharing on this blog.

 

Microsoft also released SQL Server Management Studio support for different versions of Linux, including Ubuntu, but I couldn't find any rpm for Linux or Ubuntu. Once I find the download location of this rpm, I will update the document with how to install SSMS. Please download the attachment and see if that helps.

Welcome and Enjoy Linux World!!!

install-sql-in-linux

SQL Server Agent XPs disabled

SQL Server Agent (Agent XPs disabled)

There are some interesting observations regarding this error. If we see this error, we can check the following:

1. Check whether Agent XPs is disabled.

  We will run the following query to see if the value is set to 1:

select name, value from sys.configurations where name = 'Agent XPs'

2. If the value is set to 0, then we execute the following statements to enable the configuration.

sp_configure 'show advanced options',1
go
reconfigure with override
go
sp_configure 'Agent XPs',1
go
reconfigure with override
go
sp_configure 'show advanced options',0
go
reconfigure with override
go

3. Restart SQL Agent.

Even though we followed the above steps, the agent still showed as disabled and we were not able to browse the agent.

Here are some interesting observations.

  1. Checking the error log file, we found there was a process, "SQLAgent - Email loader", that changed the value back to 0 after the SQL Agent restart.
  2. We verified whether any stored procedures or anything else ran during the agent restart, and nothing was found.
  3. We checked the SQL Server service startup parameters and found that T3608 had been added.

This trace flag bypasses recovery for all databases, including the system databases, except the master database. So the changed values were not reflected when the agent started up, and the Agent XPs value went back to its default value, which is 0.

  4. We removed this trace flag from the startup parameters and restarted the agent. Once the SQL Agent restart completed, the disabled message disappeared and we were able to browse the agent.

Please check the startup parameters when troubleshooting any SQL Server issue.
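
One quick way to review them from T-SQL is a small sketch like this; sys.dm_server_registry requires SQL Server 2008 R2 SP1 or later, and DBCC TRACESTATUS lists the trace flags currently active on the instance.

-- List the startup parameters (-d, -e, -l and any -T trace flags)
SELECT value_name, value_data
FROM sys.dm_server_registry
WHERE value_name LIKE N'SQLArg%'

-- List the trace flags currently active on the instance
DBCC TRACESTATUS(-1)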

Refresh a schema in a database

We have several databases, and we periodically need to refresh only some schemas, not the entire database. The objects of the schema do not change that frequently, so the schema refresh only deals with data. Some of the tables in the schema are extremely large; deleting the rows from these tables would take a very long time, and the delete operation would generate a large log file. The best solution is to truncate all tables of the respective schema and insert the data from the source tables. We decided to follow the steps described below for the schema refresh.

  1. Script out all existing foreign key constraints and indexed views for all tables in the particular schema into a temp table.
  • This temp table will contain the scripts to drop the foreign keys, truncate the tables, and recreate the foreign keys.
  • The TRUNCATE TABLE operation will fail if the table is referenced by a foreign key constraint or an indexed view, so we need to drop these constraints or indexed views if any are present.
  • Drop any merge or transactional replication settings on the tables; replication settings will cause the TRUNCATE to fail. Fortunately, we don't have any replication settings on our tables.

There is a great post by Greg Robidoux on how to script out the Enable, Disable, Drop and Create FK statements based on the primary key table. We used his script and modified a portion of it for our requirements. The modified version of the script is below.

DECLARE @tableName sysname
DECLARE @schemaName sysname
DECLARE @ID INT
SET @schemaName = 'XXX'
DECLARE @cmd NVARCHAR(1000)
DECLARE @sql_DROP nvarchar(150)
DECLARE @sql_Trunc nvarchar(150)
DECLARE @sql_Create nvarchar(500)
 
IF OBJECT_ID('tempdb..#FkTable') IS NOT NULL
DROP TABLE #FkTable
 
CREATE TABLE #FkTable
(
ID INT IDENTITY(1,1),
sql_DROP nvarchar(150),
sql_Trunc nvarchar(150),
sql_Create nvarchar(500)
)
 
DECLARE
@FK_NAME sysname,
@FK_OBJECTID INT,
@FK_DISABLED INT,
@FK_NOT_FOR_REPLICATION INT,
@DELETE_RULE smallint,
@UPDATE_RULE smallint,
@FKTABLE_NAME sysname,
@FKTABLE_OWNER sysname,
@PKTABLE_NAME sysname,
@PKTABLE_OWNER sysname,
@FKCOLUMN_NAME sysname,
@PKCOLUMN_NAME sysname,
@CONSTRAINT_COLID INT
 
DECLARE cursor_fkeys CURSOR FOR
SELECT Fk.name,
Fk.OBJECT_ID,
Fk.is_disabled,
Fk.is_not_for_replication,
Fk.delete_referential_action,
Fk.update_referential_action,
OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name,
schema_name(Fk.schema_id) AS Fk_table_schema,
TbR.name AS Pk_table_name,
schema_name(TbR.schema_id) Pk_table_schema
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id -- inner join
WHERE schema_name(TbR.schema_id) = @schemaName
 
OPEN cursor_fkeys
FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER
 
WHILE @@FETCH_STATUS = 0
BEGIN
 
-- create statement for dropping FK and also for recreating FK
-- drop statement
SET @sql_DROP = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME
+ '] DROP CONSTRAINT [' + @FK_NAME + ']'

SET @sql_Trunc = 'TRUNCATE TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + ']'
-- PRINT @sql_DROP
 
-- create process
 
DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT
-- create cursor to get FK columns
 
DECLARE cursor_fkeyCols CURSOR FOR
SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name,
COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name
FROM sys.foreign_keys Fk LEFT OUTER JOIN
sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id INNER JOIN
sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID
WHERE TbR.name = @PKTABLE_NAME
AND schema_name(TbR.schema_id) = @schemaName
AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008
ORDER BY Fk_Cl.constraint_column_id
 
OPEN cursor_fkeyCols
 
FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME
 
SET @COUNTER = 1
SET @FKCOLUMNS = ''
SET @PKCOLUMNS = ''
 
WHILE @@FETCH_STATUS = 0
BEGIN
IF @COUNTER > 1
BEGIN
SET @FKCOLUMNS = @FKCOLUMNS + ','
SET @PKCOLUMNS = @PKCOLUMNS + ','
END
SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']'
SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']'
SET @COUNTER = @COUNTER + 1
 
FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME
END
CLOSE cursor_fkeyCols
DEALLOCATE cursor_fkeyCols
-- generate create FK statement
SET @sql_Create = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + '] WITH ' +
CASE @FK_DISABLED
WHEN 0 THEN ' CHECK '
WHEN 1 THEN ' NOCHECK '
END + ' ADD CONSTRAINT [' + @FK_NAME
+ '] FOREIGN KEY (' + @FKCOLUMNS
+ ') REFERENCES [' + @PKTABLE_OWNER + '].[' + @PKTABLE_NAME + '] ('
+ @PKCOLUMNS + ') ON UPDATE ' +
CASE @UPDATE_RULE
WHEN 0 THEN ' NO ACTION '
WHEN 1 THEN ' CASCADE '
WHEN 2 THEN ' SET NULL '
END + ' ON DELETE ' +
CASE @DELETE_RULE
WHEN 0 THEN ' NO ACTION '
WHEN 1 THEN ' CASCADE '
WHEN 2 THEN ' SET NULL '
END + '' +
CASE @FK_NOT_FOR_REPLICATION
WHEN 0 THEN ''
WHEN 1 THEN ' NOT FOR REPLICATION '
END
--PRINT @sql_Create
 
INSERT INTO #FkTable( sql_DROP,sql_Trunc,sql_Create ) VALUES ( @sql_DROP,@sql_Trunc, @sql_Create )
FETCH NEXT FROM cursor_fkeys
INTO @FK_NAME,@FK_OBJECTID,
@FK_DISABLED,
@FK_NOT_FOR_REPLICATION,
@DELETE_RULE,
@UPDATE_RULE,
@FKTABLE_NAME,
@FKTABLE_OWNER,
@PKTABLE_NAME,
@PKTABLE_OWNER
END
CLOSE cursor_fkeys
DEALLOCATE cursor_fkeys
DECLARE cursor_fkeysCmd CURSOR FOR
SELECT sql_Trunc,sql_DROP FROM #FkTable
OPEN cursor_fkeysCmd
FETCH NEXT FROM cursor_fkeysCmd INTO @sql_Trunc,@sql_DROP
 
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC sp_executesql @sql_DROP
EXEC sp_executesql @sql_Trunc
 
FETCH NEXT FROM cursor_fkeysCmd INTO @sql_Trunc,@sql_DROP
END
 
CLOSE cursor_fkeysCmd
DEALLOCATE cursor_fkeysCmd
 
  • Find the tables that have an IDENTITY column and SET IDENTITY_INSERT ON for these tables, to allow inserting explicit values into the IDENTITY columns.
DECLARE @sql_Table nvarchar(100)
DECLARE @sql_Create nvarchar(150)
 
DECLARE cursor_idyTable CURSOR FOR
SELECT '[' + SCHEMA_NAME(OBJECTPROPERTY(OBJECT_ID,'SCHEMAID')) + '].[' + OBJECT_NAME(OBJECT_ID) + ']' as TABLE_NAME
FROM SYS.COLUMNS
WHERE COLUMNPROPERTY(OBJECT_ID, NAME, 'IsIdentity') = 1
AND SCHEMA_NAME(OBJECTPROPERTY(OBJECT_ID,'SCHEMAID')) = 'XXX'
OPEN cursor_idyTable
 
FETCH NEXT FROM cursor_idyTable INTO @sql_Table
 
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql_Create = 'SET IDENTITY_INSERT ' + @sql_Table + ' ON'
EXEC sp_executesql @sql_Create
FETCH NEXT FROM cursor_idyTable INTO @sql_Table
END
 
CLOSE cursor_idyTable
DEALLOCATE cursor_idyTable
  • Using the Database Export/Import wizard, we import data into every table in the specific schema.
  • Recreate the foreign key constraints using the scripts captured in the first step.
DECLARE cursor_fkeysCmd CURSOR FOR
SELECT sql_Create FROM #FkTable
OPEN cursor_fkeysCmd
 
FETCH NEXT FROM cursor_fkeysCmd INTO @sql_Create
 
WHILE @@FETCH_STATUS = 0
BEGIN
       EXEC sp_executesql @sql_Create
       FETCH NEXT FROM cursor_fkeysCmd INTO @sql_Create
END
 
CLOSE cursor_fkeysCmd
DEALLOCATE cursor_fkeysCmd
  • SET IDENTITY_INSERT OFF for all IDENTITY tables in the specific schema (a sketch follows below).
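
This final step mirrors the earlier IDENTITY_INSERT ON loop. A minimal sketch, assuming the same 'XXX' schema placeholder as above:

DECLARE @sql_Table nvarchar(100)
DECLARE @sql_Off nvarchar(150)

DECLARE cursor_idyTableOff CURSOR FOR
SELECT '[' + SCHEMA_NAME(OBJECTPROPERTY(OBJECT_ID,'SCHEMAID')) + '].[' + OBJECT_NAME(OBJECT_ID) + ']'
FROM SYS.COLUMNS
WHERE COLUMNPROPERTY(OBJECT_ID, NAME, 'IsIdentity') = 1
AND SCHEMA_NAME(OBJECTPROPERTY(OBJECT_ID,'SCHEMAID')) = 'XXX'

OPEN cursor_idyTableOff
FETCH NEXT FROM cursor_idyTableOff INTO @sql_Table
WHILE @@FETCH_STATUS = 0
BEGIN
-- turn IDENTITY_INSERT back off for this table
SET @sql_Off = 'SET IDENTITY_INSERT ' + @sql_Table + ' OFF'
EXEC sp_executesql @sql_Off
FETCH NEXT FROM cursor_idyTableOff INTO @sql_Table
END
CLOSE cursor_idyTableOff
DEALLOCATE cursor_idyTableOff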

Enjoy!!!!

Rebuild Service Broker Queue internal table indexes

We have an audit application whose database uses Service Broker to receive the audit messages. The system handles a large number of audit messages per second from each device, and as time went on more devices were registered to the application; meanwhile, the message transfer rate to the broker also grew exponentially.

When dealing with an application with a high volume of messages, the Service Broker queues can grow very large. Reads and deletes from the queues also occur in large numbers, so a high level of fragmentation can build up in these queue tables, which may slow down read and delete operations against the queue.

We also noticed that sometimes message processing hit a threshold from which it could not recover: the queue processing rate slowed below the incoming rate, and as a result the queue started growing.

Queues are backed by hidden internal tables, and neither an ALTER QUEUE ... REBUILD nor an ALTER QUEUE ... REORGANIZE statement can be executed against these hidden tables. But if we log in to the SQL Server over the DAC (Dedicated Administrator Connection), we are able to view these hidden tables. Once connected via the DAC, the following query returns the name of the internal table that backs each Service Broker queue in the database.
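
A sketch of that kind of query; the join through sys.internal_tables and the QUEUE_MESSAGES filter are our assumptions about how the hidden tables are exposed, and sys.partitions supplies an approximate row count per queue.

select sq.name as queue_name,
it.name as internal_table_name,
p.rows as queue_rows
from sys.service_queues sq
join sys.internal_tables it on it.parent_object_id = sq.object_id
join sys.partitions p on p.object_id = it.object_id and p.index_id in (0,1)
where it.internal_type_desc = 'QUEUE_MESSAGES'
order by p.rows desc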

Great!!! With a DAC connection we can reindex the Service Broker queue internal tables.

Here is the approach we use in our environment to rebuild the queue internal tables. We rebuild only the queues that have more than 200 rows, but this threshold can be changed depending on the environment.
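
A minimal sketch of such a rebuild script, run over the DAC connection; the dynamic ALTER INDEX ALL statement and the 200-row threshold are illustrative, not our exact production job.

declare @tbl sysname
declare @cmd nvarchar(400)

declare cursor_queues cursor for
select it.name
from sys.service_queues sq
join sys.internal_tables it on it.parent_object_id = sq.object_id
join sys.partitions p on p.object_id = it.object_id and p.index_id in (0,1)
where it.internal_type_desc = 'QUEUE_MESSAGES'
and p.rows > 200 -- rebuild only the queues with more than 200 rows

open cursor_queues
fetch next from cursor_queues into @tbl
while @@fetch_status = 0
begin
-- rebuild all indexes on the hidden queue_messages table
set @cmd = N'alter index all on [sys].[' + @tbl + N'] rebuild'
exec sp_executesql @cmd
fetch next from cursor_queues into @tbl
end
close cursor_queues
deallocate cursor_queues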

Database in SIMPLE recovery but log file still not truncating

Recently we restored a database to a test environment provided by a customer for a schema upgrade. The database was large, and the schema change would generate a large log, so we stored the log file on a large drive. But the log file growth was so great that it ran out of disk space, and as a result the upgrade process failed. The database was in SIMPLE recovery, and the schema upgrade was planned to run in batches, so after each batch of transactions the log should have been truncated and the log file should have gone back to its initial size. But the log file grew continuously after each batch of schema modifications. The database is large, and it was in a test environment, so no backup had been taken of this database.

The script changing the schema produced these messages once the process failed.

/* -- Command Failed -- */
The transaction log for database 'XXXXX' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.

The log_reuse_wait_desc column value in the sys.databases DMV was showing NOTHING.
We wanted to verify whether the database recovery model had changed after the restore, so we focused our investigation on the error logs and trace files.
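
For reference, both values can be checked with one query (the database name is a placeholder):

select name, recovery_model_desc, log_reuse_wait_desc
from sys.databases
where name = N'XXXXX'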

There is a very good tutorial by Simon on how to find out when a database recovery model was changed. We modified the script from his blog for our needs to find more details about the recovery model modification.

set nocount on
declare @searchString1 varchar(255)
declare @searchString2 varchar(255)
set @searchString1 = 'RECOVERY'
set @searchString2 = 'OPTION'

declare @logs table (LogNo int, StartDate Datetime, FileSize int)
declare @results table (LogFileNo int, LogDate Datetime, ProcessInfo varchar(20), Text varchar(max))
insert into @logs EXEC master..sp_enumerrorlogs

declare cLogs cursor for select LogNo from @logs
declare @LogNo int
open cLogs
fetch cLogs into @LogNo
while @@fetch_status = 0
begin
insert into @results(LogDate, ProcessInfo, Text)
EXEC sp_readerrorlog @LogNo, 1, @searchString1, @searchString2
update @results set LogFileNo = @LogNo where LogFileNo is null
fetch cLogs into @LogNo
end
close cLogs
deallocate cLogs

select * from @results

select starttime, *
from fn_trace_gettable('xxxxxxxxxxxxxxxxxxxxxxxxxx', null) t
join @results r on t.StartTime between dateadd(ms,-150,r.logDate) and dateadd(ms,150,r.logdate)
and t.spid = substring(r.ProcessInfo,5,10) -- required to enable a hash join to be used
where t.EventClass = 164
and EventSubClass = 1
set statistics time off
-- Use of a temp table results in a nested loop join but also a spool

Here is the result we found from the above query.


The result of the query gave us the facts: the database was in the FULL recovery model when it was restored, and we changed the recovery model to SIMPLE at the time shown in the StartTime column. But no full backup was ever taken of the database after the recovery model change. Because of this, the database was still generating transaction log as if it were in the FULL recovery model.

We took a full backup of the database and restarted the schema modification scripts, and this time the schema changes completed successfully.

Reporting Services generates a blank screen after prompting for credentials three times

Our users were able to access the Reporting Services URL when they connected to the server via RDP, but they were unable to access the Reporting Services URL remotely through the browser. Reporting Services was running under a domain account with sufficient permissions. When we changed the service account to "Network Service", users were able to access the URL both remotely and via RDP on the server. Unfortunately, the company's security policy prohibits using any account other than a domain account to run services, so we were unable to use the "Network Service" account.

Symptom:

When users try to connect to the Reporting Services URL (Web Service and Report Manager) remotely via the browser, they are prompted three times to enter credentials. After the third prompt, the user is presented with a blank screen.

Preliminary check:

1. SQL Server 2008 R2 Reporting Services is installed on a Windows Server 2008 server.
2. Windows Firewall is turned off for the specific Reporting Services port.
3. The service domain account has sysadmin privileges on the SQL Server.
4. The service domain account is a member of the local Administrators group.
5. The user connecting to Reporting Services has System Administrator and Content Manager privileges.

Solution:
After further research and investigation we learned we had three possible solutions for this error.

1. Delete the specific authentication types in rsreportserver.xml;
2. Change the Reporting Services service account to "Network Service"; or
3. Configure a specific host header for SSRS, configure the DNS server to understand that host header, and then configure the domain controller to have an SPN for the host header and the Reporting Services service account.

In our case, solution 2 was not an option given the security requirements. Solution 3 required additional network team resources in addition to security approval, so we decided to try solution 1: deleting the authentication types in rsreportserver.xml.

To delete the specific authentication types inside the rsreportserver.xml file, open …\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.xml. Search the file for the "AuthenticationTypes" section. If the section contains RSWindowsNegotiate or RSWindowsBasic, then either delete them or comment them out. Here is what the section should look like after you comment them out.
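
A hedged example of the resulting section; the exact contents of the default file can differ between installations:

<Authentication>
  <AuthenticationTypes>
    <!-- <RSWindowsNegotiate/> -->
    <!-- <RSWindowsBasic/> -->
    <RSWindowsNTLM/>
  </AuthenticationTypes>
</Authentication>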

Once the file has been changed, restart the services.

After following those instructions, we were able to access the Reporting Services URL (Web Service and Report Manager) remotely via the browser.

Hopefully this helps some of you troubleshoot Reporting Services.

Track Database file growth

It is important to know the database file sizes and the free space available in each of the database files. A server monitoring tool may send alert emails when a server drive's free space exceeds a threshold. If the database files are set to auto-grow (most production servers set this option), then whenever a data file needs to grow it will grow based on the option selected in the Autogrowth/Maxsize window. But sometimes we may not have enough free space on the data drive, so the operating system cannot complete the SQL Server data file growth request. The following script gives us a detailed picture of each database data file's autogrowth setting, as well as how much free space is available in the data file.

if exists ( select * from tempdb.dbo.sysobjects o
where o.xtype in ('U') and o.id = object_id( N'tempdb..#fileInfo' ))
drop table #fileInfo
go

set nocount on
go
create table #fileInfo
(
[ID] int not null identity (1, 1) primary key,
[databaseID] smallint not null,
[fileID] smallint not null ,
[fileSize] int not null ,
[fileUsed] int not null ,
[fileUnused] int not null
)
go

declare @sql nvarchar(4000)
set @sql =
'use [' + '?' + '] ;
if db_name() <> N''?'' goto Error_Exit

insert into #fileInfo
(
[databaseID],
[fileID],
[fileSize],
[fileUsed],
[fileUnused]
)

Select
DB_ID(),
aa.fileid,
-- All sizes are calculated in MB
convert(int,round((aa.size*1.000)/128.000,0)),
convert(int,round(fileproperty(aa.name,''SpaceUsed'')/128.000,0)),
convert(int,round((aa.size-fileproperty(aa.name,''SpaceUsed''))/128.000,0))
from
dbo.sysfiles aa
left join
dbo.sysfilegroups bb
on ( aa.groupid = bb.groupid )

Error_Exit:
'

exec sp_msforeachdb @sql

select dbid
,d.name
,d.compatibility_level
,d.recovery_model_desc
,case when saf.status & 0x100000 = 0 then convert(varchar,ceiling((saf.growth * 8192)/(1024.0*1024.0))) + ' MB'
else convert (varchar, saf.growth) + ' Percent'
end as Growth
,convert(decimal(18,2),(sum(size)*8)/1024.0) as db_size_in_mb
,convert(decimal(18,2),db_info.fileUsed) as file_used_mb
,convert(decimal(18,2),db_info.fileUnused) as file_unused_mb
,(select convert(decimal(18,2),(size*8)/1024.0)
from sys.sysaltfiles where dbid=saf.dbid and groupid=0) as log_size_in_mb
from sys.sysaltfiles saf
join sys.databases d on saf.dbid=d.database_id
join #fileInfo db_info
on saf.dbid = db_info.databaseID and saf.fileid = db_info.fileID
where groupid>0
group by dbid,d.name,d.compatibility_level,Growth,saf.status,
db_info.fileUsed,db_info.fileUnused,d.recovery_model_desc
order by name

The output of this query will look like this.

Audit Log Trimming job in SharePoint runs in multiple batches

The Audit Log Trimming job in SharePoint 2010 trims the audit log, which tracks all activities that occur within the environment. The site collection administrator has the ability to set auditing requirements within the environment that determine the types of actions that should be tracked.
But on one of the SharePoint servers, auditing was configured and the audit purge was not scheduled. The Audit Log Trimming job had not run for more than a year, so the AuditData table had grown to 65 GB in size, with more than 2 billion records.
One of the SharePoint admins scheduled this job to run every day during low activity on the database server. The job kicked off for the very first time and continued running for 12 hours without completing. The job was deleting data from the AuditData table; as a result, a large number of locks were created on the database server, and users were not able to access some functionality and sites in SharePoint.
Due to the SLA agreement for minimum downtime during business hours, we had to kill the session, and all transactions were rolled back. This also took considerable time to roll back. Later on we decided to purge the data manually instead of running the job, keeping up to one month of data in the AuditData table. Once we manually deleted data from the table, we rescheduled the job, and this time the job ran successfully in a few minutes. Here is the procedure we followed to purge the AuditData table.

Step 1:
-- Take a full backup of the database.

USE [master]
GO

BACKUP DATABASE [xxx] TO DISK = N'F:\xxxxxxx.BAK' WITH NOFORMAT, NOINIT, NAME = N'xxxx-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

Step 2:

Find out which site in SharePoint had the most data in the AuditData table by using the following query.

WITH SiteLogin (tp_SiteID,tp_ID,tp_SiteAdmin,tp_Login,RowNumber) AS
(
select tp_SiteID,tp_ID,tp_SiteAdmin,tp_Login
,ROW_NUMBER() over(partition by tp_SiteID order by tp_SiteID) RowNumber
from dbo.UserInfo
where tp_SiteAdmin = 1
)

select FullUrl,a.SiteId,COUNT(a.SiteId) as total , Max(a.Occurred) as MaxDate,
MIN(a.Occurred) as MinDate,tp_ID
from AuditData a inner join Sites s on a.SiteId = s.Id
inner join SiteLogin l on a.SiteId = l.tp_SiteID
where l.RowNumber = 1
group by FullUrl,SiteId,tp_ID
order by FullUrl

We saved this query result to an Excel file. Later we used this Excel file to get the parameter values for the stored procedure used in the following steps.

The table had more than 2 billion records, the database was in full recovery mode, and we didn't have enough free disk space on the log drive. So we had to delete the rows in batches, filtering on the Occurred date in the AuditData table. In our case, we found that 95% of the rows in the table belonged to one site. So we deleted the rows for that site in batches and the other sites' rows in one statement.

Step 3:

Disable all nonclustered indexes on the AuditData table.
This improved the delete operation time. (A small helper to generate the DISABLE statements is sketched after the example below.)

USE [xxxxxx]
GO

alter index AuditData_OnItem on dbo.AuditData disable
GO
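
If the table has several nonclustered indexes, a small helper like this (an illustrative sketch, not part of the original runbook) can generate a DISABLE statement for each of them:

-- Generate DISABLE statements for every nonclustered index on dbo.AuditData
SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON dbo.AuditData DISABLE;'
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID('dbo.AuditData')
AND i.type_desc = 'NONCLUSTERED'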

Step 4.a:
-- In this step we created a temp table.
-- This table stored the key values (the SiteId and Occurred columns) from the AuditData table. We filtered the key values based on the Occurred column, so the temp table stored only the key values for that particular batch.
-- Later, the delete query joined the temp table with the AuditData table on the key values to delete the rows.
-- Joining the temp table to the base table during a large delete creates fewer key-level locks in the database.

USE [xxxxx]
GO

declare @SiteId uniqueidentifier
declare @EndDate datetime
declare @count int
declare @UserId int

set @SiteId =   -- From the Step 2 Excel file
set @EndDate = 'xxxxxxxxx'
set @UserId =   -- From the Step 2 Excel file

The @EndDate values should be between the MaxDate and MinDate in the Excel file for the corresponding SiteId. We decided on one month as our batch interval. That might be different in your environment, depending on how the data are spread across the Occurred column for that particular site.

CREATE TABLE #AuditDataDelete
(
[SiteId] [uniqueidentifier] NOT NULL,
[Occurred] [datetime] NOT NULL
) ON [PRIMARY]

ALTER TABLE #AuditDataDelete ADD PRIMARY KEY CLUSTERED ([SiteId],[Occurred])

-- note: no GO batch separators here, so the variables declared above stay in scope

Insert Into #AuditDataDelete
Select [SiteId],[Occurred]
From AuditData with(nolock)
Where [SiteId] = @SiteId and [Occurred] < @EndDate

Delete AD
From AuditData AD
Inner Join #AuditDataDelete TD
On TD.[SiteId] = AD.[SiteId] and TD.[Occurred] = AD.[Occurred]

SET @count = @@ROWCOUNT

Step 4.b:
-- In this step we recorded the previous step's delete statistics in the AuditData table.
-- To record the statistics we ran the proc_AddSiteAuditEntryFromSql stored procedure.
-- This stored procedure has a few parameters; here are the values we applied:
-- UserId: from the Step 2 Excel file.
-- SiteId: from the Step 2 Excel file.
-- EventID: 50 (default).
-- @count: from the previous step.
-- @EndDate: from the previous step.
-- The @AuditEventData parameter value is built below.

DECLARE @AuditEventData nvarchar(4000)
DECLARE @Ret int
SET @Ret = 0

SET @AuditEventData = N'<DeleteEntriesInfo><Rows>' + CONVERT(nvarchar(20), @count) + N'</Rows><EndDate>' + CONVERT(nvarchar(30), @EndDate, 120) + N'</EndDate></DeleteEntriesInfo>'

EXEC @Ret = proc_AddSiteAuditEntryFromSql
@SiteId,
@UserId,
50,
@AuditEventData

Step 4.c:
-- The database was in full recovery mode, so after each batch we had to back up the transaction log. Otherwise the log file would grow very large and we might run out of disk space.
-- A timestamp is also added to each log backup file name so previous log files are not overwritten.
-- This way, if a restore were required, we could restore to a point in time.
-- The log file is also shrunk in this step in order to free up VLFs.

DECLARE @sql nvarchar(4000)
DECLARE @filename nvarchar(255)
DECLARE @folder nvarchar(255)
DECLARE @full_path_and_filename nvarchar(1000)
DECLARE @err_msg nvarchar(2000)
DECLARE @crlf varchar(2)
SET @crlf = CHAR (13) + CHAR (10)

SET @folder = 'X:\XXXXX\'
SET @filename = 'XXXXX' + '_backup_' + REPLACE (REPLACE (REPLACE (CONVERT (char (16), CURRENT_TIMESTAMP, 120), '-', ''), ' ', ''), ':', '')
SET @full_path_and_filename = @folder + @filename + '.TRN'

SET @sql = 'BACKUP ' + 'LOG' + ' ' + QUOTENAME ('VIP_Sites') + @crlf
SET @sql = @sql + 'TO DISK = ' + QUOTENAME (@full_path_and_filename, '''') + @crlf
SET @sql = @sql + 'WITH' + @crlf
SET @sql = @sql + ' NOFORMAT,' + @crlf
SET @sql = @sql + ' NOINIT,' + @crlf
SET @sql = @sql + ' NAME = ' + QUOTENAME (@filename, '''') + ',' + @crlf
SET @sql = @sql + ' SKIP, NOREWIND, NOUNLOAD, STATS = 10 ' + ',' + @crlf
SET @sql = LEFT (@sql, LEN (@sql) - 3)

--PRINT @sql
EXEC (@sql)
GO

DBCC SHRINKFILE (XXXXXXXX,1)
GO

Step 5:
We repeated step 4 in multiple batches depending on each site's data in the AuditData table. The goal was to keep only one month of audit data for all sites. In our case, if any site had more than 9 million records in the AuditData table, we considered a multiple-batch delete for that site; otherwise we deleted the site's data in a single batch. Besides record counts, the decision between multiple and single batches depends on how much free space is on your log drive (so the transaction log file can grow), as well as on the transaction log backup interval and the allowed downtime of the application. A rough sketch of a batch driver follows.
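
Here is a rough sketch (not our exact job) of how the monthly batches could be driven for one large site; @SiteId is a placeholder taken from the Step 2 Excel file, and the loop body stands in for steps 4.a through 4.c.

DECLARE @SiteId uniqueidentifier
DECLARE @BatchEnd datetime
DECLARE @StopDate datetime

SET @SiteId = '00000000-0000-0000-0000-000000000000' -- placeholder from the Step 2 Excel file

-- first batch boundary: one month past the oldest row for the site
SELECT @BatchEnd = DATEADD(month, 1, MIN(Occurred))
FROM AuditData
WHERE SiteId = @SiteId

-- keep one month of audit data
SET @StopDate = DATEADD(month, -1, GETDATE())

WHILE @BatchEnd < @StopDate
BEGIN
-- run the step 4.a delete with @EndDate = @BatchEnd,
-- then the step 4.b audit entry and the step 4.c log backup and shrink
SET @BatchEnd = DATEADD(month, 1, @BatchEnd)
END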

Step 6:
-- The table might have internal fragmentation after this large delete operation. So we ran this DBCC command to recover the empty space from the table; the command recovers space from a table if the table has any variable-length columns.

USE [xxxxx]
GO
DBCC CLEANTABLE ('xxxx', 'AuditData')
GO

Step 7:
-- Rebuild the clustered index.

USE [xxxxxx]
GO

ALTER INDEX AuditData_OnSiteOccurred ON dbo.AuditData REBUILD;
GO

Step 8:
-- Rebuild all nonclustered indexes.

USE [xxxxxx]
GO

ALTER INDEX AuditData_OnItem ON dbo.AuditData REBUILD;
GO

MSDTC is failing with a remote server

We had a problem with remote DTC (Distributed Transaction Coordinator) connections on one of our test SQL Servers. We experienced "RPC Endpoint Mapper" and "firewall has closed the ports" error messages during a BizTalk Server installation. Due to the architectural design, the BizTalk Server and the SQL Server are on different machines, and during installation the BizTalk Server couldn't connect to the SQL Server through the MSDTC service. After further investigation, we followed the steps below to fix this remote connection issue.

  1. As the very first step, we confirmed that the MSDTC service was configured correctly on both servers. There is an excellent tutorial posted by Mr. Denny on how to configure the MSDTC service in Windows Server 2008. Please follow that tutorial to set up the MSDTC service and its security properties.
  2. Once MSDTC was installed and configured in Windows 2008 R2, the local DTC security configuration should look like the screenshot below (in our environment). To view the local DTC security configuration, go to Start > Administrative Tools > Component Services; this opens a Component Services window. In that window, navigate to Component Services > Computers > My Computer > Distributed Transaction Coordinator. Right-click Local DTC and select Properties; this pops up the Local DTC Properties window.

    We also confirmed that the boxes marked in red under Security Settings and Transaction Manager Communication were checked.

  3. As part of the project design, BizTalk and the database server were on different machines, so we configured the local DTC security settings on both servers.
  4. Firewall configurations are described in KB250367 to control RPC dynamic port allocation. We applied this configuration on both servers. Note also that the firewall must be open in both directions for the specified ports. Here are the steps:
    a) To start Registry Editor, click Start, click Run, type regedt32, and then click OK.
    b) You must use Regedt32.exe, rather than Regedit.exe, because Regedit.exe does not support the REG_MULTI_SZ data type that is required for the Ports value.
    c) Expand the tree by double-clicking the folders named in the following path:
    HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc
    d) Click the RPC folder, and then click Add Key on the Edit menu.
    e) In the Add Key dialog box, in the Key Name box, type Internet, and then click OK.
    f) Add the following value under the Internet key:
    Value: Ports
    Data Type: REG_MULTI_SZ
    Data: 5000-5020
    g) Add another value under the Internet key:
    Value: PortsInternetAvailable
    Data Type: REG_SZ
    Data: Y
    h) Add another value under the Internet key:
    Value: UseInternetPorts
    Data Type: REG_SZ
    Data: Y
    i) Restart the computer. When RPC restarts, it will assign incoming ports dynamically, based on the registry values that we have specified. Once we successfully completed the above steps, the new registry should look like this.

  5. The next step was to open these ports and the msdtc.exe program as exceptions in the firewall. Here are the screenshots after we configured the firewall.
    a) Msdtc.exe program exception.

    b) TCP/IP ports.

    135 - RPC EPM (endpoint mapper)
    5000-5100 - MSDTC (ports dynamically assigned by the EPM)

Use the DTCTester Tool 

We downloaded and installed dtctester on both servers. The DTCTester tool tests a distributed transaction against a specified Microsoft SQL Server. This tool helps test distributed transactions across firewalls or against networks. dtctester performs distributed transactions by using the ODBC API against a SQL Server database. The tool creates a temp table in SQL Server, inserts a record into the created temp table, and at the end commits the changes.
The following were the steps we followed to test DTC with the DTCTester tool.

  1. We created an ODBC data source for our SQL Server through the ODBC utility on the server where BizTalk Server was installed.
  2. Navigate in a command prompt to the folder where the dtctester.exe file was extracted.
  3. Execute the following from the command line:
    dtctester <dsn name> <user name> <password>
    Replace the values in brackets as appropriate for your environment.
  4. We configured the ODBC data source and executed the above command on both servers. If the test is successful, you will see success messages in the command window.
  5. We saw those messages in the DTCTester result window on both servers, which proved that the MSDTC service was working remotely and locally.

Hopefully this post can help you debug some MSDTC connection errors.

Hard Drive Compression failed SQL Installation

Recently I had a project where I installed SQL Server 2008 R2. The installation failed due to a compressed drive where SQL Server was being installed. Here is the error message.

After further investigation I found from the drive properties that "Compress this drive to save disk space" was checked.

Uncheck this option and another window will open. In the new window, select the "Apply changes to drive, subfolders and files" radio button and click OK. It may take some time to complete the process, depending on the drive's capacity.

Restart the installation. This time the installation will complete successfully.