sqlorcl


Balancing IO Across Multiple Drives by Distributing Data Files in a Shared SQL Environment

Recently I worked on a shared SQL Server environment where many SharePoint 2010 databases were hosted on a single SQL Server instance. The SharePoint databases had been upgraded from the 2007 version to 2010, which left some of the SharePoint 2007 databases obsolete. These databases could be deleted from the server to free up drive space.
The SQL Server has five logical drives, each connected to an individual LUN on the SAN, but users had been complaining about application latency for a long time.
Although SharePoint creates many databases, due to space limitations the data files had been spread across the drives based on file size rather than workload.
We ran Perfmon and collected data for the server. In-depth analysis of the Perfmon data showed that the databases on the server had an IO performance problem.
Because the files were spread across drives this way, several busy data files could end up on the same drive, competing for the same IO resources on the LUN.
The following query was executed to find the IO access statistics for each database file across the entire SQL Server instance:

declare @total_bytes_read as dec(18, 2)
declare @total_bytes_written as dec(18, 2)
declare @total_disk_bytes as dec(18, 2)

-- Server-wide totals, used as denominators for the percentages below.
select
    @total_bytes_read = SUM(num_of_bytes_read),
    @total_bytes_written = SUM(num_of_bytes_written),
    @total_disk_bytes = SUM(size_on_disk_bytes)
from sys.dm_io_virtual_file_stats(NULL, NULL)
where file_id <> 2 -- exclude log files

select DB_NAME(s.database_id) as [Database],

    CONVERT(varchar(12), CAST(num_of_bytes_read / (1024.0 * 1024.0) as dec(10, 2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), num_of_bytes_read * 100.0 / @total_bytes_read)) + ' %)' as [Mbytes_read (%)],

    ReadLatency = CASE WHEN num_of_reads = 0 THEN 0 ELSE io_stall_read_ms / num_of_reads END,

    CONVERT(varchar(12), CAST(num_of_bytes_written / (1024.0 * 1024.0) as dec(10, 2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), num_of_bytes_written * 100.0 / @total_bytes_written)) + ' %)' as [Mbytes_written (%)],

    WriteLatency = CASE WHEN num_of_writes = 0 THEN 0 ELSE io_stall_write_ms / num_of_writes END,

    CONVERT(varchar(12), CAST(size_on_disk_bytes / (1024.0 * 1024.0) as dec(10, 2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), size_on_disk_bytes * 100.0 / @total_disk_bytes)) + ' %)' as [disk_Mbytes (%)],
    CONVERT(varchar(12), CAST(io_stall_read_ms / (1000.0 * 60.0) as dec(10, 2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), io_stall_read_ms * 100.0 / NULLIF(io_stall, 0))) + ' %)' as [io_stall_read_minute (%)],
    CONVERT(varchar(12), CAST(io_stall_write_ms / (1000.0 * 60.0) as dec(10, 2))) + ' (' + CONVERT(varchar(12), CONVERT(dec(11, 2), io_stall_write_ms * 100.0 / NULLIF(io_stall, 0))) + ' %)' as [io_stall_write_minute (%)],
    CAST(io_stall / (1000.0 * 60.0) as dec(10, 2)) as io_stall_minute,
    f.name as [Logical Name],
    LEFT(f.filename, 2) as [Drive Letter],
    f.filename as [File Full Path]

from sys.sysaltfiles f
inner join sys.dm_io_virtual_file_stats(NULL, NULL) s
    on f.dbid = s.database_id
    and f.fileid = s.file_id
where s.file_id <> 2 -- exclude log files
order by DB_NAME(s.database_id)

The screenshot below is a portion of the query output:

The query calculates the read and write percentages for each data file relative to the entire SQL Server instance: each file's read and write byte counts are divided by the server-wide totals. The output also includes the read and write latency as well as the total disk space used by each data file.
Based on the data above, we took the following steps to improve the server's IO performance:
1. Delete the obsolete databases to free up drive space (we first took and saved current full backups in case the databases are needed again).
2. Keep the largest data files on their existing drives.
3. Move data files to different drives based on their read/write percentages, so that no single drive is flooded with the busiest files.
4. tempdb had the highest read and write percentages, so we moved the tempdb data files to a completely separate, high-performance LUN.
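Step 4 can be carried out with ALTER DATABASE ... MODIFY FILE. A minimal sketch, assuming the default tempdb logical file names (tempdev, templog) and a hypothetical target path T:\TempDB on the new LUN; tempdb is recreated in the new location only after the SQL Server service is restarted:

```sql
-- Point the tempdb files at the new LUN; takes effect after a service restart.
USE master;
GO
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
GO
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
GO
```

Check sys.master_files afterwards to confirm the new paths before restarting the service.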


Database Properties Page Bug in Denali

I have found some interesting behavior in the Denali database properties window.

I installed two instances of the x64 Enterprise Edition of Denali on two different servers, and in both instances I saw the same behavior: in the database properties window, the General page and the Files page show different sizes for the same database. I am not sure whether this is a bug in Denali.

On the General page:

On the Files page:

Using the Utility Control Point to track the log file growth

Recently we moved a SQL Server to different hardware. The existing SQL Server version is 2008 R2, and the server is registered to a Utility Control Point (UCP) server running on a separate instance.

As usual on new hardware, we keep all log files on a separate drive with a RAID 10 configuration.
To figure out how large the log drive should be in order to avoid out-of-disk-space errors, we used the UCP's historical log file size data.

During the period covered by this historical data, various processes and jobs running against the server may have grown the log files, so the information was good enough to project future log file growth.

We could use the Utility Control Point GUI to find the largest log file size, but with many databases on one server, as in a shared environment, that is not always feasible.

Instead, the following query finds the historical maximum log file size for the databases on a specific instance (or on all instances).

I have also added the earliest processing date on which each database's maximum log size was reached. The reason for including this date is that if a database's current log file is much smaller than its historical maximum, we can go back to that date to find out what was running at the time. If that process runs again, this helps us anticipate future log growth so we don't run into out-of-disk-space issues.

use sysutility_mdw -- change to your UCP database name
go

WITH MaxTranUsages
(processing_time, database_name, dbfile_name, used_space_bytes, allocated_space_bytes, RowNumber) AS
(
    select
        processing_time, database_name, dbfile_name
        ,used_space_bytes
        ,allocated_space_bytes
        ,ROW_NUMBER() over (partition by database_name
            order by allocated_space_bytes desc, processing_time) RowNumber
    from sysutility_ucp_core.space_utilization_internal
    where server_instance_name = 'Your Server Instance Name'
        -- remove this filter to include all instances
        and object_type = 7 -- choose the object type depending on the report
        -- object_type 7 = Log File
        -- object_type 6 = Data File
        -- object_type 4 = Database
)
select
    database_name
    ,CONVERT(VARCHAR(20), processing_time, 100) as processing_time
    ,dbfile_name
    ,CAST(used_space_bytes / (1024.0 * 1024.0) as numeric(10, 2)) as used_space_Mbytes
    ,CAST(allocated_space_bytes / (1024.0 * 1024.0) as numeric(10, 2)) as allocated_space_Mbytes
    ,RowNumber
from MaxTranUsages
where RowNumber = 1
order by database_name
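To turn the per-database maximums into a rough drive-size estimate, the same CTE can be aggregated. A minimal sketch, run in place of the final SELECT (a CTE is only visible to the statement that immediately follows it); the 20% headroom factor is an arbitrary example of my own, not a sizing rule:

```sql
-- Sum the historical maximum log allocations across all databases
-- and add some headroom when sizing the new log drive.
select
    CAST(SUM(allocated_space_bytes) / (1024.0 * 1024.0) as numeric(12, 2)) as total_max_log_Mbytes,
    CAST(SUM(allocated_space_bytes) * 1.2 / (1024.0 * 1024.0) as numeric(12, 2)) as suggested_drive_Mbytes
from MaxTranUsages
where RowNumber = 1;
```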

Integration Services Running on a Dynamic Port Can't Be Connected to Remotely

Recently one of our DBAs installed a named instance of SQL Server 2008 R2 x64 Enterprise Edition in one of our test environments. Integration Services was installed as part of the same installation. Due to the company's security policy, we had to change the TCP/IP port in SQL Server Network Configuration from dynamic to a specific static port number.

The development team members were not part of the local server administrators group, so we followed the steps described in detail in a Microsoft KB article to configure remote connections to the Integration Services server.

But the development team members complained that they couldn't connect to Integration Services from their local machines.

Investigation and Technical Details
During the investigation I tried to connect to Integration Services with a test user profile, but I wasn't able to connect. However, when I used RDP to the server, I was able to connect to Integration Services successfully. Further investigation turned up the following:

o Port 135 was open in the firewall for Integration Services.
o SQL Browser was running.
o An alias had been created for SQL Server.
o Using the netstat command, I found that SSIS was not listening on port 135.

I also verified, using the netstat command, that SSIS was listening on a dynamic port, and each time I restarted SSIS the service came up on a different port within the range 49152 to 65535. For testing purposes, I opened that dynamic port range in the firewall and was able to connect to Integration Services remotely with the test user. However, company policy prohibits opening a range of ports in the firewall.
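The port check described above can be reproduced with built-in Windows tools. A sketch, assuming the SSIS service executable name MsDtsSrvr.exe (the 2008 R2 service binary); <PID> is a placeholder for the process ID reported by tasklist:

```shell
rem Find the PID of the SSIS service, then list the ports it is listening on.
tasklist /FI "IMAGENAME eq MsDtsSrvr.exe"
netstat -ano | findstr <PID>
```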

Solution
After thorough investigation and online research, we took several steps to fix this issue so that SSIS could be connected to remotely. In summary:
1. Open a new static port for SSIS in the Windows firewall.
2. Edit the registry key HKEY_CLASSES_ROOT\AppID\{xxxxx-xxxx-xxxx-xxxxxx} (replace with the appropriate value for your server; the end of the post summarizes how to find the AppID).
3. Create a new REG_MULTI_SZ value.
4. Name it Endpoints.
5. Set the value to ncacn_ip_tcp,0,# where # is the new static SSIS port.
6. Restart the SSIS service.
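Steps 2 through 5 can also be performed from an elevated command prompt with reg.exe. A sketch, where the GUID remains a placeholder for your server's AppID and 13000 is a hypothetical static port, not a recommendation:

```shell
rem Create the Endpoints REG_MULTI_SZ under the SSIS AppID key (run elevated).
reg add "HKCR\AppID\{xxxxx-xxxx-xxxx-xxxxxx}" /v Endpoints /t REG_MULTI_SZ /d "ncacn_ip_tcp,0,13000"
```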
The sample screenshot below shows how the registry looks once the steps above are followed.
 


Hopefully this article will help with troubleshooting SSIS remote connections.

How to Find the AppID
1. Type Dcomcnfg.exe at the command prompt and press Enter; the Component Services window will open.
2. Expand Component Services, Computers, and My Computer.
3. Expand DCOM Config.
4. Find MsDtsServer100.
5. Right-click MsDtsServer100 and select Properties.
6. A Properties window will open; the AppID appears next to Application ID on the General tab.

Here is a screenshot from our environment: