Do Sysadmins Dream Of Electric Sheep?

A collection of random mutterings and mumblings about Windows and other technologies…

SQL Restore Issues – “The specified resource name cannot be found in the image file”

Posted by Joe Thompson on September 9, 2013

While working through a variety of issues with a problem application upgrade on a client’s site this week, it became necessary to restore the whole server back to its pre-upgrade state and start again.

Fortunately, prior to the upgrade commencing, we’d taken copious backups and snapshots – the OS through VMware, the SQL DB drives using NetApp’s SnapManager for SQL tool, and the application drive using good old NetApp snapshots.

The restore was as simple as powering down the VM and restoring the OS, then – while the VM was still down – using OnCommand to select the correct (ie consistent) database snapshots and application snapshot to return to, then powering the VM back up and checking everything looked OK… (you gotta love virtual technology, right? This would’ve taken ages from tape…)

I’d actually been through this process a couple of times already with this server – this upgrade was being performed by an outsourcing team and it’s fair to say they were making heavy work of it. The first couple worked like a charm, but this time when I opened up SQL Management Studio I got an error that the database was not online.

When I had a look in Services, the SQL Server service was showing as Stopping. Which was nice…

Next stop: event logs, and in the System Log I found this:

Level: Error
Source: Service Control Manager 
Event ID: 7024 
The SQL Server (DBNAME) service terminated with service-specific error The specified resource name cannot be found in the image file

A quick dabble on Google pointed me towards an issue with database configuration. I checked the database and logfile drives, and the correct folders were there with the correct files.

Onwards! To the SQL ERRORLOG (by default, the error log is located at Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG and ERRORLOG.n files) where I found this:

2013-09-09 11:00:10.64 spid10s     Clearing tempdb database.
2013-09-09 11:00:10.65 spid10s     Error: 5123, Severity: 16, State: 1.
2013-09-09 11:00:10.65 spid10s     CREATE FILE encountered operating system error 3(failed to retrieve text for this error. Reason: 15100) while attempting to open or create the physical file 'T:\TEMPDB\DBNAME\tempdb.mdf'.
2013-09-09 11:00:10.65 spid10s     Error: 17204, Severity: 16, State: 1.
2013-09-09 11:00:10.65 spid10s     FCB::Open failed: Could not open file T:\TEMPDB\DBNAME\tempdb.mdf for file number 1.  OS error: 3(failed to retrieve text for this error. Reason: 15105).
2013-09-09 11:00:10.66 spid10s     Error: 5120, Severity: 16, State: 101.
2013-09-09 11:00:10.66 spid10s     Unable to open the physical file "T:\TEMPDB\DBNAME\tempdb.mdf". Operating system error 3: "3(failed to retrieve text for this error. Reason: 15105)".
2013-09-09 11:00:10.66 spid10s     Error: 1802, Severity: 16, State: 4.
2013-09-09 11:00:10.66 spid10s     CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
2013-09-09 11:00:10.66 spid10s     Error: 5123, Severity: 16, State: 1.
2013-09-09 11:00:10.66 spid10s     CREATE FILE encountered operating system error 3(failed to retrieve text for this error. Reason: 15105) while attempting to open or create the physical file 'T:\TEMPDB\DBNAME\tempdb.mdf'.
2013-09-09 11:00:10.66 spid10s     Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.
2013-09-09 11:00:10.66 spid10s     SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.

Ah ha – nearly there…

I had a look on the T: drive and realised I hadn’t restored this particular volume, as tempdb is recreated from scratch whenever the SQL Server service restarts anyway.

Unfortunately I hadn’t planned on the application upgrade team renaming the tempdb folder (as a kind of crude backup) to make way for a fresh install of SQL. When the old instance tried to start up, it couldn’t find the path.

I simply renamed the folder back to its original name and the service started like a charm.
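For future reference, once the instance is back up it’s worth sanity-checking where SQL Server expects to find (and recreate) tempdb on its next start – a quick query against sys.master_files does the trick:

-- Where will tempdb files be created on the next service start?
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');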

Posted in Uncategorized | Leave a Comment »

SQL Server – Kill Processes Script

Posted by Joe Thompson on July 17, 2013

Borrowed from the very lovely Andy White and reproduced here to make my life easier!!

The following script will kill all processes connected to a database of your choosing – just replace the database name assigned to @DatabaseName below with the one you’re working on…

 

-- Kill all processes connected to a database.
use master; 

declare @DatabaseName varchar(50);
declare @spid varchar(20);
declare @Command varchar(50); 

set @DatabaseName = 'MyDatabase'; 

print 'This query''s SPID: ' + convert(varchar, @@spid); 

-- Select all SPIDs except the SPID for this connection
declare SpidCursor cursor for
select spid from master.dbo.sysprocesses
where dbid = db_id(@DatabaseName)
and spid != @@spid 

open SpidCursor 

fetch next from SpidCursor into @spid 

while @@fetch_status = 0
begin
    print 'Killing process: ' + rtrim(@spid);
    set @Command = 'kill ' + rtrim(@spid) + ';';
    print @Command;
    execute(@Command);
    fetch next from SpidCursor into @spid
end 

close SpidCursor
deallocate SpidCursor 

Thanks Andy!
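As a side note (not part of Andy’s script), if the aim is simply to get exclusive access to a database – say before a restore or detach – a rough one-liner alternative is to flip it into single-user mode, which rolls back any open transactions for you:

-- Kick everyone out and roll back their open transactions
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ...do whatever needed exclusive access, then let users back in
ALTER DATABASE MyDatabase SET MULTI_USER;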

Posted in SQL | 2 Comments »

IIS Website Suddenly Dropped Connectivity – NTLM Provider Fix

Posted by Joe Thompson on July 16, 2013

I ran into an issue on a client site recently where users were suddenly unable to connect to a website hosted on IIS using the server hostname.

What was unusual in this instance was that we had an alternative IP and hostname set up for a virtual application running on the same server, and our users could connect to the same site using either the primary IP, the alternative IP, or the alternative hostname, but not the original hostname that they had been using up until this point.

Unfortunately the users did not report this at the time it occurred, preferring instead to wait several weeks until the same fault occurred on an identically configured server in a different environment (!).

Looking in the IIS logs, I could see 401 errors, so this was clearly an authentication issue.

The substatus code was 2, which is explained by Microsoft as:

HTTP 401.2: Denied by server configuration

Description

The client browser and IIS could not agree on an authentication protocol.

Common reasons

  • No authentication protocol (including anonymous) is selected in IIS. At least one authentication type must be selected.
  • Only Integrated authentication is enabled, and an older, non-Internet Explorer client browser tries to access the site.
  • Integrated authentication is through a proxy. This happens because the proxy doesn’t maintain the NTLM-authenticated connection and thus sends an anonymous request from the client to the server.

After much digging around, we finally came across an authentication setting in the IIS management console which resolved the issue:

Under Site Name > IIS > Authentication > Windows Authentication > Providers we simply needed to switch the order of the Enabled Providers to display NTLM first and Negotiate second.

As soon as we clicked OK, the site immediately began to work as expected, with no need for a server reboot or even an iisreset.

We’re still unsure of the root cause of this issue, but as one of these environments is particularly strictly controlled, we do know that the only change we’re aware of around the time this site stopped working was Microsoft patching. Unfortunately we haven’t had time to delve into exactly which patch may have caused this issue, so this is still just a theory.

It’s also fair to point out that someone may have changed something without authorisation; however, the fact that this happened on two separate servers around their respective patch dates does smell somewhat fishy…

Hopefully it’ll help someone else out there if they run into a similar-sounding problem.

Posted in IIS, IIS 7.0, Microsoft, Web Hosting | Leave a Comment »

SQL Server Performance Monitoring

Posted by Joe Thompson on May 7, 2013

This week I was asked by a client to build a “performance profile” for a number of their Microsoft SQL servers in order to better understand their usage patterns, potential bottlenecks and long-term growth with a view to drastically scaling-up an existing solution.

Having the advantage of a fully operational SCOM 2012 implementation to hand, I had a quick scoot around the available performance counters and settled on the following as giving a good overall picture of the SQL database performance:

General Statistics: User Connections

As you’d expect, this counter simply shows the number of connected users at any one time.

SQL Statistics: Batch Requests / Sec

The number of batch requests received by the server per second gives a good overall picture of the activity level of the SQL server. In an ideal world, this should follow a similar pattern to the user connections. Many sources seem to view 1000 requests / sec as a warning threshold for further investigation, however whether this is actually a problem will depend on many other factors.

Buffer Manager: Page Life Expectancy

Measures the number of seconds a page is expected to stay in the buffer cache. Anything under 300s may indicate poor index design, or a potential shortage of memory.

Buffer Manager: Free List Stalls / Sec

Shows the number of requests per second that stall because no buffer is available. A value consistently above 2 may indicate a shortage of memory.

Locks: Lock Waits / Sec

The number of times per second that users had to wait to acquire a lock on SQL resources. Ideally this value should be 0.

SQL Statistics: SQL Compilations / Sec

Shows the number of times that T-SQL needed to compile an execution plan in the last second. This value should be as low as possible (ideally less than 10% of the number of Batch Requests / Sec). Excessively high values may require rewriting of queries.

In addition to the above SQL specific counters, it’s important to grab some general OS counters to build up an overall picture of the server performance, as well as any virtual infrastructure, storage or network related info that is available to ensure no bottlenecks are overlooked.
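As an aside (not part of the SCOM setup itself), if you just want a quick spot check of the same counters without leaving Management Studio, they can be read from SQL Server via the sys.dm_os_performance_counters DMV. A rough sketch – bear in mind that the “/sec” counters in this view are cumulative totals, so you need to sample twice and take the difference to get a true per-second rate:

-- Snapshot of the counters discussed above (per-second values are cumulative here)
SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN
    ('User Connections',
     'Batch Requests/sec',
     'Page life expectancy',
     'Free list stalls/sec',
     'Lock Waits/sec',
     'SQL Compilations/sec');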

Bonus time…..

For the mega-SQL performance geeks out there, this poster put together by Kevin Kline and Brent Ozar (among others) explains every SQL-relevant performance counter in detail and is well worth printing out for further reference:

http://www.quest.com/backstage/images/promotions/SQLServer-Perfmonance-Poster.pdf

Posted in Microsoft, Performance, SQL | Leave a Comment »

Useful SQL Commands

Posted by Joe Thompson on April 2, 2013

The following page lists (with not much further information) a bunch of SQL commands I want to be able to remember. It will grow over time, and hopefully I’ll add some detail to it. For now though, this is just a memory pad for yours truly which I’m publishing in case it helps anyone else…

1. How much free space is in my transaction log file?

DBCC SQLPERF(logspace)

See here for more info…
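If the instance has a lot of databases and you want to sort or filter that output, one handy trick (the temp table name and column types here are just an illustration) is to capture it into a temp table first:

-- Capture DBCC SQLPERF(logspace) output so it can be sorted / filtered
CREATE TABLE #LogSpace
(
    DatabaseName    sysname,
    LogSizeMB       float,
    LogSpaceUsedPct float,
    [Status]        int
);
INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(logspace)');

SELECT * FROM #LogSpace ORDER BY LogSpaceUsedPct DESC;
DROP TABLE #LogSpace;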

2. Schedule a date-stamped transaction log backup

DECLARE @BackupFileName varchar(100)

SET @BackupFileName = 'C:\path\DB_LogBackup_' + REPLACE(REPLACE(REPLACE(CONVERT(varchar,GETDATE(), 20),'-',''),':',''),' ','') + '.BAK'

BACKUP LOG [DatabaseName] TO DISK = @BackupFileName
WITH NOFORMAT,
	NOINIT,
	NAME = N'DB-Transaction Log Backup',
	SKIP,
	NOREWIND,
	NOUNLOAD,
	STATS = 10
GO

3. How much whitespace is there in the database?

USE dbname
GO

SELECT DB_NAME() AS DbName,
 name AS FileName,
 size/128.0 AS CurrentSizeMB,
 size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB
FROM sys.database_files;

See here for more info.

4. Diagnosing slow downs and hangs in the database

First, use sp_who2 to get a list of all running spids, including any blocked processes (BlkBy column):

EXEC sp_who2

Then, find out what the problem spids are doing:

DBCC INPUTBUFFER(spid)

For more info see here and here.
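On SQL 2005 and later there’s also a DMV-based alternative (not part of the original note, just a sketch) which shows blocked sessions alongside the statement they’re running:

-- Blocked requests and the SQL text they are executing
SELECT r.session_id,
       r.blocking_session_id,
       r.status,
       r.wait_type,
       t.[text] AS running_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;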

5. Moving user databases

USE master;
GO

ALTER DATABASE db_name MODIFY FILE ( NAME = 'db_name', FILENAME = 'D:\db_name\db_name.mdf' );
GO
ALTER DATABASE db_name MODIFY FILE ( NAME = 'db_name_log', FILENAME = 'L:\db_name\db_name_log.ldf' );
GO

ALTER DATABASE db_name SET OFFLINE;
GO

( then manually move the files to the new location )

ALTER DATABASE db_name SET ONLINE;
GO

6. Moving system databases (eg tempdb)

USE master;
GO

ALTER DATABASE tempdb MODIFY FILE ( NAME = 'tempdev', FILENAME = 'T:\TEMPDB\tempdb.mdf' );
GO
ALTER DATABASE tempdb MODIFY FILE ( NAME = 'templog', FILENAME = 'T:\TEMPDB\tempdb.ldf' );
GO

( then restart SQL Server Service )

Posted in SQL | Leave a Comment »

Diagnosing NetApp CPU Issues – Kahuna Bottlenecks

Posted by Joe Thompson on January 24, 2013

I’m working on a client site at the moment where one of their filers frequently exhibits slow console response. As yet (fortunately), no end users have reported performance issues. The client has asked me to investigate further to find out what’s going on before someone does start to complain.

Filer: FAS3240
OnTAP Version: 8.1.0 7-mode
Clustered: Active/Active

Notes: The cluster partner is not displaying the same symptoms. OpsManager is showing a “CPU Too Busy” warning.

1) Run sysstat -m

Running sysstat -x or -u shows a stat called “CPU”. On this system, it’s almost solidly fixed at 99%. Not good, but the system’s still operating. If the CPU’s running so hot, how can this be?

CPU NFS CIFS HTTP Net kB/s Disk kB/s Tape kB/s Cache
        in out read write read write age
99% 3484 373 0 25505 19574 102993 57939 0 0 0s
99% 3304 463 0 23901 14424 104082 48239 0 0 9s
99% 2675 511 0 35202 9846 101772 47316 0 0 0s
99% 1615 243 0 18633 6818 75278 18690 0 0 0s
99% 1950 570 0 17120 6956 61602 53518 0 0 11s
99% 2177 357 0 30155 8587 96486 58358 0 0 7s
99% 2690 388 0 23931 5428 77650 36123 0 0 0s
99% 2781 280 0 33375 10736 97023 24737 0 0 0s
99% 3055 230 0 18215 14255 73371 77096 0 0 7s
99% 2431 357 0 10294 11330 68075 47320 0 0 0s
99% 2363 258 0 11092 19261 62122 63 0 0 0s

The FAS3240 is a multiprocessor beastie running 4 concurrent CPUs. The CPU stat you’re seeing is not the combined or averaged CPU time across all four CPUs, but rather the total percentage of time that any one of the four processors is in use.

This can be a little misleading at first, as this value would show 100% if only one of the CPUs was running at 100% and the other three were idle, or if each of them were running at 25% but were running (by some freak occurrence) in sequence. Likewise, if all four CPUs were running at 100%, you would get the same value.

Running sysstat -m gives us a little more detail on which of these might be the case:

ANY AVG CPU0 CPU1 CPU2 CPU3
100% 74% 67% 72% 73% 82%
100% 70% 62% 64% 69% 85%
100% 65% 57% 62% 63% 75%
100% 63% 55% 62% 59% 77%
100% 64% 60% 62% 61% 74%
100% 65% 60% 64% 63% 74%
100% 68% 58% 65% 66% 85%
100% 56% 41% 48% 48% 86%
100% 48% 30% 39% 37% 86%
100% 49% 31% 37% 39% 88%
100% 47% 28% 33% 37% 88%
100% 49% 33% 36% 42% 83%
100% 45% 30% 34% 36% 78%
100% 51% 35% 41% 43% 83%
100% 45% 30% 34% 35% 81%

A few moments later, the ANY stat (which correlates to the CPU stat in sysstat -x or -u) has jumped from 99% to 100%, but we can clearly see why the rest of the filer is operating normally: the AVG column shows the average CPU usage, which generally sits between 50% and 70% – high, but not critically so – while a scan of the individual CPU columns shows a much more balanced picture. None of them is maxed out, so business is continuing as usual.

All of which is lovely, but then why is our console running slowly?

2) Run sysstat -M

To drill down a bit further, we need to use a privileged command. sysstat -M (note the upper case M) is only available in diag mode, and there is virtually no documentation I can find about it on the NetApp support site.

Running this command gives an extended CPU output which is too long for this display, so I’m going to cut it into two parts:

ANY1+ ANY2+ ANY3+ ANY4+ AVG CPU0 CPU1 CPU2 CPU3
100% 61% 43% 25% 58% 41% 47% 51% 93%
100% 62% 44% 27% 59% 43% 48% 52% 93%
100% 64% 45% 26% 60% 47% 49% 54% 88%
100% 75% 55% 36% 67% 53% 57% 64% 94%
100% 67% 49% 33% 63% 48% 52% 58% 93%
100% 67% 46% 30% 62% 47% 52% 56% 91%

The left hand side of the display shows an extended version of the same CPU info as sysstat -m gave us, with additional columns for the amount of time any one or more, two or more, three or more, or four or more CPUs are in use. (We only have four CPUs, so more is pretty much impossible!)

Network Protocol Cluster Storage Raid Target Kahuna WAFL_Ex (Kahu) WAFL_XClean SM_Exempt Cifs Exempt Intr Host Ops/s CP
24% 0% 0% 10% 19% 0% 64% 77% (36%) 4% 0% 0% 28% 5% 1% 3454 85%
27% 0% 0% 10% 19% 0% 64% 77% (36%) 4% 0% 0% 29% 5% 1% 4524 87%
31% 0% 0% 10% 18% 0% 57% 82% (42%) 4% 0% 1% 29% 6% 1% 4409 61%
35% 0% 0% 13% 25% 0% 57% 88% (42%) 7% 0% 1% 34% 6% 1% 5057 99%
26% 0% 0% 11% 23% 0% 56% 84% (43%) 6% 0% 1% 38% 5% 1% 3743 99%
33% 0% 0% 10% 18% 2% 55% 83% (44%) 4% 0% 1% 31% 7% 1% 4788 68%

The right hand side of the display shows what NetApp calls “domains” – groups of related processes which can help clearly show which part of the system may be causing problems.

More detailed information on how NetApp filers handle processes across multiple CPUs can be found here – https://kb.netapp.com/support/index?page=content&id=3010150 – but the key information to know is that although different domains can run simultaneously on different processors, each individual domain can only exist on a single CPU at any one time. This is useful, because it means that any domain showing 100% usage indicates a CPU bottleneck for that bundle of related processes. Happily, there’s none of that here.

(Obviously there are exceptions to any rule, and so the above does not apply to any domains with Ex or Exempt in the name.)

To make matters a little more complicated, Kahuna processes and WAFL_Ex(Kahu) processes (the bit in brackets) cannot run simultaneously, so a potential Kahuna bottleneck occurs when the Kahuna value and the (Kahu) value add up to 100%.

Looking again at the above sysstat output, it’s pretty clear that this is what’s happening…

Kahuna WAFL_Ex (Kahu) Kahuna + (Kahu)
64% 77% (36%) 100%
64% 77% (36%) 100%
57% 82% (42%) 99%
57% 88% (42%) 99%
56% 84% (43%) 99%
55% 83% (44%) 99%

So the Kahuna domain has hit a CPU bottleneck. What does this mean?

Well – once again, information on the NetApp support site about which processes sit within the Kahuna domain is scant, but from what’s left when you take away the other domains (Network, Protocol, Cluster, Storage, RAID, WAFL, SnapMirror) we can reasonably infer that what we’re really talking about are the top-level system management processes: console management, performance logging, de-dupe (as it’s visible nowhere else), and so on.

So – what could be causing this bottleneck?

Fortunately, a quick scan of the NetApp support site pulls up a bug report that seems to fit: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=210480

Bug Detail

Bug ID: 210480
Title: Block reclamation scanners cause kahuna bottleneck.
Bug Severity: 2 – System barely usable
Bug Status: Fixed
Product: Unknown
Bug Type: Unknown

Description:
Block reclamation scanner returns unused blocks held by a container back to the aggregate. The amount of work the scanner does depends on the size of the flexvol it is running on and the number of blocks that are allocated in the flexvol but not in the aggregate. Some operations (e.g., snapshot deletion) can free a large number of blocks and cause the scanner to run for a long period of time. This in turn can cause high CPU utilization and a significant increase in response times.

This problem is most likely to affect low-end platforms such as FAS270.

This problem is present in 7.0.4 and later releases.

Workaround: None.

Fixed-In Versions:
Data ONTAP 7.0.5P8 (First Fixed) – Fixed
Data ONTAP 7.0.7 (GD) – Fixed
Data ONTAP 7.1.3 (GD) – Fixed
Data ONTAP 7.2.7 (GD) – Fixed
Data ONTAP 7.3.3 (GD) – Fixed
Data ONTAP 7.3.7 (GA) – Fixed
Data ONTAP 7.3.7P1 (GA) – Fixed
Data ONTAP 8.0.4 (GA) – Fixed
Data ONTAP 8.1.2 (GA) – Fixed

A complete list of releases where this bug is fixed is available here.

Related Bugs: 168255, 212265, 185422, 299001

This bug matches our findings and our software level, and even neatly fits in with a planned upgrade to 8.1.2 we have scheduled for next weekend.

I’ll report back after and let y’all know if the fix worked…

UPDATE: The upgrade to 8.1.2 seems to have resolved the CPU issues we were having: CPU usage in sysstat is down to around 50% and the combined Kahuna values are peaking at around 60-70%.

It’s worth noting that Mike (in the comments section) quite rightly points out that the bug I’ve highlighted above is listed by NetApp as being fixed in the version of OnTAP we were running prior to the upgrade (8.1.0 – see here for the full list); however, as the symptoms fit the description pretty accurately and the “fix” worked, I’ve decided to leave this post as is for now in case it helps anyone else out.

Posted in NetApp | 6 Comments »

Geek fact of the day: Mebibytes vs. Megabytes

Posted by Joe Thompson on January 24, 2013

While reading some storage documentation this morning, I came across a chart which expressed disk sizes in a unit I’d never heard of before: MEBIBYTES.

“Huh? What the hell’s a mebibyte?” I wondered.

The explanation given in the text was only medium helpful:

“The numbers shown are in Mebibytes (MiBs). This unit of measure is
equivalent to 2 to the 20th power bytes. (MBs, in contrast, are 10 to the sixth power bytes.)”

Oh, ok. Thanks…:-/

A quick jump on the scientific calculator explains a bit further:

2^20 = 1,048,576

10^6 = 1,000,000

Any geek worth his salt knows that a kilobyte is not 1,000 bytes, but actually 1,024, hence a megabyte is really 1,024 x 1,024, which coincidentally equals 1,048,576. Ah ha – now we’re getting somewhere…!

A quick search on Google returned me to the ever-reliable Wikipedia, which solved my befuddlement with a few interesting nuggets of information.

Apparently, there is some ambiguity as to what megabyte actually means. According to Wikipedia:

“The megabyte (abbreviated as Mbyte or MB) is a multiple of the unit byte for digital information storage or transmission with three different values depending on context: 1048576 bytes (2^20) generally for computer memory; and one million bytes (10^6) generally for computer storage.”

To make matters worse, this ambiguity has caused screw-ups on a global level in the past, most notably when the internationally recognised 1.44 MB floppy disk was incorrectly calculated on the basis of a “megabyte” being 1,024,000 bytes (ie 1,024 x 1,000, as opposed to 1,024 x 1,024 or 1,000 x 1,000).
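For the record, that hybrid definition is exactly where the floppy’s odd-looking real capacity comes from:

1.44 x 1,024 x 1,000 = 1,474,560 bytes (ie 1,440 KiB, or roughly 1.41 MiB)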

As a result, the IEEE Standards Committee has endorsed the International Electrotechnical Commission’s definition of a Mebibyte (a combination of MEga and BInary) for situations where “use of a binary prefix makes sense”.

Phew – glad we got that cleared up then!

Posted in General Geekery | Leave a Comment »

SQL Transaction Logs – How Old?

Posted by Joe Thompson on January 9, 2013

I stumbled across a situation today while playing with some new SQL backup software where I needed to check whether the transaction logs were being properly cleared down during a “logs only” backup run.

The problem here is that simply looking at the .ldf file produced by SQL Server gives you no useful information whatsoever: the size of the file does not automatically shrink following a successful backup, the modify date does not change when new transactions are written or purged, and the file itself is effectively unreadable without specialist (often expensive) third-party tools.

Working on a client site, I was not in a position to download any additional tools and use their trial period to examine the contents of the file without jumping through far too many hoops to make the effort worthwhile, so instead I jumped on everyone’s favourite search engine to see whether this was a resolvable issue.

Fortunately, others had clearly come across the same issue and I quickly found the following command:

SELECT * FROM ::fn_dblog(NULL, NULL)

I jumped onto SQL Management Studio and headed for my DB, opened a new query window and dropped in the code. Hey presto, a few seconds later I had all the transactions in the log file sitting inscrutably in front of me.

I would hate to have to try and find any useful information in here (in that situation I would almost certainly try the third-party products) but helpfully, there was a column labelled “Begin Time” which clearly stated the start time and date of each transaction.

Oddly, many of these rows had no begin time, so I quickly revised the above code as follows:

SELECT * FROM ::fn_dblog(NULL, NULL)
     WHERE [Begin Time] IS NOT NULL

This gave me all the information I needed.

One last note is that if your transaction log is very large or hasn’t been cleared out in some time, there will be a lot of information returned. In this instance, it’s useful to use a date / time filter to obtain the most recent / oldest information, for example:

SELECT * FROM ::fn_dblog(NULL, NULL)
     WHERE [Begin Time] > '2013/01/09 12:56:00:00'

Hope this helps😉

Posted in Microsoft, SQL | Leave a Comment »

Remote Desktop Users Getting Temporary Profiles On W2k8 Servers

Posted by Joe Thompson on May 24, 2012

Title of the post pretty much sums this issue up…

We have a growing number of servers where “ordinary” users require RDP access in some form or another. The first appearance of this was with a new RDS implementation on which we’ve deployed SAS alongside several other applications, but since then we’ve deployed a couple of other applications for development and monitoring where non-admin users (and hence without admin accounts) need to access the application locally on the server.

Despite several attempts to deploy Windows 7 (!) the majority of the infrastructure is still on XP. As a result, when users with roaming profiles log on to RDP sessions (or RDS sessions) the XP profiles are not compatible and a temporary profile is generated instead.

The key is to generate a Terminal Services Profile in AD which will be launched whenever the RDP session is connected. This profile has a “.V2” extension which clearly defines it as separate from the original XP profile.

However, during the early stages of testing, we found that simply creating the new TS profile entry didn’t always kickstart the new profile, especially if the user had previously been logged in on the server.

To get around this, it’s necessary to delete all traces of the original profile from the server that is generating the temporary profile:

  1. Delete the profile folder from C:\Users
  2. Delete the profile list entry from the registry – HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
  3. (Optional) Run through the registry using the user’s SID and remove any and all traces of their ever having existed on that server!

The above steps may sound a bit like overkill, but in my experience they can be invaluable in ensuring that the server does not try to pick up any remnants of an old local profile before the RDS profile is picked up…

Posted in Active Directory, RDS, Terminal Services | Leave a Comment »

SQL Performance – SAN vs NAS

Posted by Joe Thompson on April 26, 2012

Found a great article on SQL Server Central recently on the pros and cons of SAN vs NAS storage for SQL Servers, and more specifically, iSCSI vs. NFS.

As a fellow NetApp user, I’ve also had queries about which is the better-performing storage. In my experience, NetApp have taken the line that NFS is “just as good” as iSCSI. Fortunately, Gregory Jackson took on the challenge to prove this one way or the other, and came to the following conclusion:

“So, even though, in raw IO, the various protocols might perform similarly, in a SQL Server Environment specifically, iSCSI can perform as much as 40% better than NFS.”

The whole article is worth a read, and links to several additional articles for further reading at the bottom. However, if you need any evidence for a client / manager / finance department that spending a bit (ok – a lot!) extra on fibre-based iSCSI connectivity is worth it, this would be a good first port of call…

Posted in NetApp, SQL | Leave a Comment »

 