Tuesday, May 6, 2014

MyChart Performance Weirdness - Part 1


Here’s the scenario:


The Epic MyChart patient portal is accessible through two logical pathways: the patient population at large reaches it from the public internet through the Epic DMZ NetScaler appliance, while Hyperspace users reach it from the internal network through the existing McKesson NetScaler appliance.


Each path to MyChart uses the core services available to it, like DNS.  Patients connecting from the outside to mychart.baptistoncare.org resolve the name publicly, while Hyperspace users connecting from the inside to mychart.bmhcc.org resolve the name using internal DNS resources.
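
If you want to sanity-check that resolution yourself, a quick lookup from each side tells the story (answers omitted here, since the addresses aren't the interesting part):

> nslookup mychart.baptistoncare.org    <- from outside, answered by public DNS
> nslookup mychart.bmhcc.org            <- from inside, answered by internal DNS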


Check it.

Graph 1: External Access




Graph 2: Internal Access




Wat. :|


Notice that the “stair step” between each transfer is almost exactly 1 second.  I’m not good enough at math to appreciate with any accuracy the transfer performance difference here.  Instead, my plebeian mind conjures terms like terribad, or silly-bad, to fill the gap.

The game is afoot!


The first thing I was able to confirm with certainty was that performance when connecting directly to the server was just as good as it was from the outside.  My problem apparently lies in the NetScaler config for this particular application, since James confirmed we’re not aware of trouble reports from any of the other applications running through it.

Despite no other reports of trouble, I checked NetScaler resource utilization first.  It was pretty uninteresting.


From here, I started pulling off SSL at the various steps of the config to identify any problems that might exist in my certificate build.


The first thing was to create an HTTP-only service group.
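
For reference, standing up an HTTP service group from the CLI looks roughly like this (the group name, server name, and port here are placeholders, not our actual config):

> add serviceGroup sg_mychart_http HTTP
> bind serviceGroup sg_mychart_http mychart_web01 80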




I then had to create a few different test virtual servers to host different configurations without affecting actual MyChart traffic.  James had already established a TCP test server, but I needed something a bit fancier.  Enter my test HTTP and SSL virtual servers.
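
And for completeness, the test virtual servers look something like this from the CLI (the names, IPs, and certkey name are all placeholders):

> add lb vserver vs_test_http HTTP 10.0.0.50 80
> add lb vserver vs_test_ssl SSL 10.0.0.51 443
> bind ssl vserver vs_test_ssl -certkeyName test_mychart_cert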




I used different combinations of my SSL and HTTP virtual servers and service groups to better understand where in the transaction my slowness was manifesting itself.
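
Swapping a given virtual server between service groups is just an unbind/bind pair (placeholder names again):

> unbind lb vserver vs_test_ssl sg_mychart_http
> bind lb vserver vs_test_ssl sg_mychart_ssl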

Here’s the breakdown of the results:


Virtual Server | Service Group | Result (Pass = Normal / Fail = Slow)
---------------|---------------|--------------------------------------
HTTP           | HTTP          | Pass
HTTP           | SSL           | Pass
SSL            | HTTP          | Pass
SSL            | SSL           | Fail



Wat wat. :|


Not 100% sure what this means, I performed an NSTRACE while using the problematic config.

Weeding through an NSTRACE file sucks


It is, however, a useful exercise, as it reveals all manner of network gossip going on around the resources in question.  I tried to configure a trace that only looked at the traffic for a particular virtual server, but it gave me everything anyway.  
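
For the record, the filtered trace I was going for looks something like this (the VIP here is a placeholder, and the exact filter syntax can vary by firmware version):

> start nstrace -size 0 -filter "CONNECTION.DSTIP.EQ(10.0.0.51)"
> stop nstrace

The resulting .cap files land under /var/nstrace/ and open happily in Wireshark.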


There are only a couple of things that really stand out to me from other applications’ traffic:

  • Lots of out-of-order TCP traffic
  • Tons of TLSv1 “Server Hello” and cipher renegotiation

What these two facts mean is still beyond me.  I’ve reached out to Chris Lancaster, Citrix Bro and NetScaler Extraordinaire, to meet with me later this afternoon and work through my methods.


To sum this thing up so far: something in how the 7500 is configured creates major per-transaction delays when brokering a client HTTPS connection to an HTTPS-enabled server.  The problem does not show up with HTTPS on only one side - HTTPS from the client alone, or HTTPS to the server alone, performs normally.  Only the full SSL-to-SSL path suffers.


Stay tuned.

Thursday, April 17, 2014

Tracking the clickahertz and jiggarams - Part 1

I assembled this list some time ago with every intention to turn it into actual Collections and Reports in PerfMon on all my IIS boxes. This has yet to happen, mainly because there were a lot of ongoing discussions about systems monitoring, Epic's SystemPulse product, BMHCC's enterprise SCOM implementation, and others.  And also because my mind - and, thus, sense of time and prioritization - is a hot mess.
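
When I do finally get around to it, standing up a basic Data Collector Set from the command line should look something like this - a sketch only, with a made-up collector name, sample interval, and output path, and with just two of the counters from the list below shown for brevity:

> logman create counter ServerBaseline -c "\Memory\Available MBytes" "\Processor(_Total)\% Processor Time" -si 00:00:15 -f bincirc -max 512 -o C:\PerfLogs\ServerBaseline
> logman start ServerBaseline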

Cobbled together from various sources, this list represents a best first attempt at a universally applicable set of key Windows metrics, regardless of a server's stated purpose.


My plan for "Part 2" involves a similar collection of metrics specific to IIS and ASP.NET applications. Those waters are considerably harder to navigate for me as a non-developer, but they're no less critical to answering questions about concurrent client connections, number of unique requests to an application, tracking memory leaks or other weirdness in worker processes, etc. I'll post that once I have it.  

Without further ado:

Memory

Memory\Available MBytes
Available system memory in megabytes (MB).
  • <10% of physical RAM considered low
  • <5% considered critically low
  • A 10MB negative delta per hour indicates a likely memory leak

Memory\Committed Bytes
Amount of committed virtual memory. Also called the "commit charge" in Task Manager.

Memory\System Cache Resident Bytes
Amount of memory consumed by the system file cache. Shows as "Metafile" in a memory explorer.
  • On 64-bit systems, memory addressing allows the system file cache to consume almost all physical RAM if left unchecked.

Memory\Pages Input/sec
Rate of pages read from disk to resolve hard page faults.
  • >10 considered high
  • Compare to Memory\Page Reads/sec to determine the number of pages read into memory for each read operation.
  • Will always be >= Page Reads/sec; a large delta between them might indicate a need for more RAM or a smaller disk cache.

Memory\Pages/sec
Rate of pages read from and written to disk to resolve hard page faults.
  • >1000 considered moderate, as memory may be getting low
  • >2000 considered critical, as the system is likely experiencing delays due to heavy reliance on slow disk resources for paging
  • Sum of Pages Input/sec and Pages Output/sec

Memory\Page Faults/sec
Rate of hard and soft page faults combined.
  • Pages/sec can be calculated as a percentage of Page Faults/sec to determine what share of total faults are hard faults (worked example after this list).

Process(*)\Handle Count
Total count of concurrent handles in use by the specified process.
  • Since this number constantly fluctuates, the delta between high and low values is most important: (Max - Min) should not exceed ~1000.
  • A consistently large handle count, or an aggressive upward trend, commonly indicates a leak that will eventually show up as a memory problem.
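
Quick worked example for that hard-fault ratio (numbers invented): if Page Faults/sec averages 2,500 and Pages/sec averages 50, hard faults make up 50 / 2,500 = 2% of all faults - soft faults dominate, and the disk isn't being punished. A much larger share means the box is leaning on its paging file.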

  
Processor
System\Processor Queue Length
Total number of queued threads waiting to be processed, across all processors.
  • >=10 considered high for a multi-processor system
  • PQL/n, where n = number of logical processors, gives the per-core queue length (worked example after this list).
  • If % Processor Time is high (>=90%) and per-core PQL is >=2, there is a performance bottleneck.
  • It is not uncommon to see low % Processor Time alongside a per-core PQL >=2, depending on the efficiency of the requesting application's threading logic.

Processor\% Processor Time
Percentage of time the specified CPU is executing non-idle threads.
  • >75% considered moderate and should be closely monitored
  • >90% considered high and may begin to cause delays in performance
  • 95%-100% considered critically high and will cause major delays in performance
  • This value should be tracked per logical processor.
  • If % Processor Time is high (>=75%) while disk and network utilization are low, consider upgrading or adding processors.

Processor\% Interrupt Time
Percentage of time the specified CPU is receiving and servicing hardware interrupts from network adapters, hard disks, and other system hardware.
  • Interrupt rates of 30%-50% or higher may indicate a driver or hardware problem.


Disk I/O

PhysicalDisk\% Disk Time
Percentage of time the selected disk spends servicing read or write requests.
  • If this value is high relative to nominal CPU and network utilization figures, disk performance is likely the problem.

PhysicalDisk\Disk Writes/sec
Average number of disk writes per second.
  • Used in conjunction with Disk Reads/sec as a general indicator of disk I/O activity.

LogicalDisk(*)\Avg. Disk sec/Write
Average time in seconds the specified disk takes to process a write request.
  • >15ms considered slow and worth close evaluation
  • >25ms considered very slow and likely to negatively impact system performance

PhysicalDisk\Avg. Disk Write Queue Length
Average number of write requests waiting to be processed.
  • Used in conjunction with Avg. Disk Read Queue Length, this gives an idea of disk access latency.
  • AWQL/n should be <= 4, where n is the number of disks in the RAID set (worked example after this list).

PhysicalDisk\Disk Reads/sec
Average number of disk reads per second.
  • Used in conjunction with Disk Writes/sec as a general indicator of disk I/O activity.

LogicalDisk(*)\Avg. Disk sec/Read
Average time in seconds the specified disk takes to process a read request.
  • >15ms considered slow and worth close evaluation
  • >25ms considered very slow and likely to negatively impact system performance

PhysicalDisk\Avg. Disk Read Queue Length
Average number of read requests waiting to be processed.
  • Used in conjunction with Avg. Disk Write Queue Length, this gives an idea of disk access latency.
  • ARQL/n should be <= 4, where n is the number of disks in use.
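
Quick worked example (numbers invented): an average write queue length of 8 against a 4-disk RAID set is 8 / 4 = 2 queued writes per spindle - comfortably inside the <= 4 guideline. The same queue on a single disk would be 8 per spindle, well outside it.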


Network I/O

Network Interface(*)\Bytes Total/sec
Total bytes sent and received per second by the specified network adapter.
  • If this exceeds 50% of the Current Bandwidth value under typical load, problems during peak times are likely.

Network Interface(*)\Current Bandwidth
Estimate of the current bandwidth, in bits per second (bps), available to the specified NIC. Considered "nominal bandwidth" where accurate estimation is impossible or where bandwidth doesn't vary.
  • To estimate current NIC utilization, use the following formula (worked example after this list):

    NIC Utilization = ((Max Bytes Total/sec * 8) / Current Bandwidth) * 100

  • >30% NIC utilization on a shared network is considered high.

Network Interface(*)\Output Queue Length
Total number of packets queued for outbound transmission by the specified NIC.
  • >1 sustained considered high
  • >2 sustained considered critically high
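
Worked example for the utilization formula (numbers invented): a Max Bytes Total/sec of 15,000,000 on a NIC reporting a Current Bandwidth of 1,000,000,000 bps (1 Gbps) gives ((15,000,000 * 8) / 1,000,000,000) * 100 = 12% utilization - comfortable on a dedicated link, and still under the 30% shared-network threshold.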

Tuesday, April 15, 2014

From (F)ailure to (A-)wesome on SSLLabs

I'm once again shamelessly copy/pasting a new post on here.  I don't feel too ashamed of it, since I did actually write the original post.  It turns out we have MySites at work, and that the Blog feature is enabled!  I'll very likely be posting in both places as the life of this Blog thing draws onward.
 ***

There's been a lot of chatter about the Heartbleed SSL vulnerability in the last couple of weeks, and rightfully so. One place folks seem to love going is over to SSLLabs, since they have a server tester you can run to determine what kind of safety grade – A to F – you get.
At the outset, my tests of the BOC Link and MyChart sites generated giant, terrifyingly red "F" results. This was not due to Heartbleed, thank goodness, since the NetScalers do not use an affected version of OpenSSL, and none of my web servers use OpenSSL at all. What failed me instead was another, slightly older vulnerability: insecure SSL renegotiation.
Both BOC Link and MyChart run behind a virtual NetScaler VPX appliance running v10.0.x of the software. Out of the box, NetScalers are configured to allow SSL renegotiation in all forms, whether initiated from the client connection or the server. A quick check at the console will tell you the current status of the parameter:

> show ssl parameter
Advanced SSL Parameters
-----------------------
SSL quantum size: 8 kB
Max CRL memory size: 256 MB
Strict CA checks: NO
Encryption trigger timeout 100 mS
Send Close-Notify YES
Encryption trigger packet count: 45
Deny SSL Renegotiation NO
Subject/Issuer Name Insertion Format: Unicode
OCSP cache size: 10 MB
Push flag: 0x0 (Auto)
Strict Host Header check for SNI enabled SSL sessions: NO
PUSH encryption trigger timeout: 1 ms
Global undef action for control policies: CLIENTAUTH
Global Undef action for data policies: NOOP 


Citrix has a pretty handy article on what exactly the -denySSLReneg parameter is, what its options are, and how to change it. See it here.
Here's the command:

> set ssl parameter -denySSLReneg NONSECURE
Done 

By setting the Deny SSL Renegotiation option to NONSECURE, I've corrected the renegotiation vulnerability without (hopefully) creating any compatibility issues for our Link and MyChart users. The setting appears to be global, so making this one change raised the scores of both sites from "F" to "A-" (RC4 ciphers, indeed!) simultaneously.

> show ssl parameter
Advanced SSL Parameters
-----------------------
SSL quantum size: 8 kB
Max CRL memory size: 256 MB
Strict CA checks: NO
Encryption trigger timeout 100 mS
Send Close-Notify YES
Encryption trigger packet count: 45
Deny SSL Renegotiation NONSECURE
Subject/Issuer Name Insertion Format: Unicode
OCSP cache size: 10 MB
Push flag: 0x0 (Auto)
Strict Host Header check for SNI enabled SSL sessions: NO
PUSH encryption trigger timeout: 1 ms
Global undef action for control policies: CLIENTAUTH
Global Undef action for data policies: NOOP

Monday, January 6, 2014

I hate dirty Application Logs - PerfMon counters and IIS Advanced Logging

I posted this one on Epic's UserWeb entity portal and - in my guilt for not posting in a while - have blatantly copied and pasted it here.  Methods aside, it's good info, especially if you're OCD about what shows up in your server's error logs like I apparently am.  Source link at the bottom.

Enjoy!

-------------
I have been configuring IIS Advanced Logging on all the IIS servers I've been building ahead of our go-live in January. It works swimmingly and solves a few problems I'd always hated about standard IIS logging:
1. The logging happens in real-time, instead of on a 3 minute delay.
2. You can add custom fields, like "Client-IP," that work a lot more smoothly with ADCs and load balancers that might otherwise mask information about a logged session.
3. You can include basic performance counters in your logs, like the W3WP CPU and memory utilization.

That #3 is why I'm posting here tonight. Even though those fields were disabled in my default log definition, I'd still get the following error for each metric in my Windows Application log:
____
Log Name: Application
Source: IIS Advanced Logging Module
Date: 12/17/2013 10:22:29 AM
Event ID: 1008
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: ECEPPRINTC01.ad.bmhcc.org
Description:
Failed to initialize performance counter \Process(w3wp)\Private Bytes. Data for this performance counter data will not be recorded until the counter is available. PdhCollectQueryData: 0x0X800007D5.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="IIS Advanced Logging Module" />
<EventID Qualifiers="0">1008</EventID>
<Level>2</Level>
<Task>0</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2013-12-17T16:22:29.000000000Z" />
<EventRecordID>2513</EventRecordID>
<Channel>Application</Channel>
<Computer>MYSERVERNAME</Computer>
<Security />
</System>
<EventData>
<Data>\Process(w3wp)\Private Bytes</Data>
<Data>0X800007D5</Data>
</EventData>
</Event>
____
The short of it: this error showed up in the logs of those servers whose application pools I'd configured to authenticate using ApplicationPoolIdentity instead of the old standby NetworkService. It occurs because ApplicationPoolIdentity has no rights to Performance Monitor, and therefore no access to the counters it's supposed to log.

The fix is to add your application pool's identity ("IIS APPPOOL\APPPOOLNAME") to the built-in Performance Monitor Users group. Doing so eliminates the errors in Windows Application log, and makes the metrics actually show up correctly in the log (instead of "-" like they were originally).
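
If you'd rather script it than click through Computer Management, a one-liner like this should do the trick (APPPOOLNAME is the same placeholder as above):

> net localgroup "Performance Monitor Users" "IIS APPPOOL\APPPOOLNAME" /add

Since a process only reads its group memberships at startup, expect to recycle the application pool (or run iisreset) before the change takes effect.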

Here's where I found the info:
http://blogs.microsoft.co.il/idof/2013/08/20/fixing-iis-advanced-logging-performance-counters-errors/