Performance Testing, LoadRunner Tips&Tricks


General: Load Testing Objectives

There are two approaches to defining load testing objectives from the user's perspective: top-down (business perspective) or bottom-up (technical perspective).
The top-down approach is concerned with the transaction time taken to submit a request and receive the response on the user's machine. This is a business perspective: the client just wants to know how the application is responding. Such users do not care about Hits/sec or Throughput; they just want to know how much time a transaction took (therefore the terminology of "transaction" must be aligned between you and the user).


The bottom-up approach is a technical perspective, concerned with measures such as ASP.NET transactions or database transactions (or commits). This is usually the perspective of a technical manager, who wants to configure the system accordingly (or do whatever else they plan to do with their system).

Hopefully the above helps you understand your users/clients a little better before giving them a satisfying answer.


General

General: How does LoadRunner license work?
General: Are client activities recorded by Vugen?
General: Scenario Execution
General: Detecting Memory Leaks Using LoadRunner
General: Vugen/Controller Crash or Abnormal Behavior
General: Virtualization with LoadRunner
General: Planning for Load Testing
General: Planning for Load Testing - Soliciting Requirements
General: Planning for Load Testing - Application Design
General: Planning for Load Testing - Protocols
General: Planning for Load Testing - Monitor Setup
General: Planning for Load Testing - Monitoring
General: Planning for Load Testing - Analyzing
General: Planning for Load Testing - Recommendations


Scripts: Parameterization

Note that parameterization may result in data dependencies, which the tester should be aware of and handle appropriately. Every load test should require at least some parameterization. Parameterization means passing different data into the application to emulate real-world users performing actions with, and entering, different values.

What do clients want to achieve from parameterization?
  1. To test different data
  2. Just to load/emulate the data
Parameterization requires identifying with the clients which areas require data input. This will be useful when discussing the Application Design. From there, they can prepare the data from the database, which is a more efficient method.


Preparing the data

Usually, after walking through the application, I will request the client to prepare the data via the database and export it as an Excel file. This reduces the time spent preparing lengthy Excel or Notepad files by hand. From there, you can manipulate the data easily in either Excel or OpenOffice.org. After I’ve amended the files properly, I save them as .dat files and place them in a shared folder for my scripts to access.

Once converted into .dat files, ensure that they have been delimited properly with commas. Note that VuGen also allows different delimiters to be set in the Parameter List.
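
For illustration, here is a minimal sketch of a parameterized login step in VuGen; the URL, field names and data-file contents are made up and will differ in your own recording:

    // login_users.dat (illustrative), comma-delimited with a header row:
    //   username,password
    //   user001,Passw0rd1
    //   user002,Passw0rd2

    web_submit_data("login",                              // recorded login step
        "Action=http://appserver.example.com/login",      // illustrative URL
        "Method=POST",
        "RecContentType=text/html",
        "Mode=HTML",
        ITEMDATA,
        "Name=username", "Value={username}", ENDITEM,     // substituted from the parameter list
        "Name=password", "Value={password}", ENDITEM,
        LAST);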

The usual items to parameterize are usernames, passwords, dates, etc., but the list is not limited to these. There are also hidden values stored in web pages (e.g. in the form of AJAX) or HTML code (e.g. hard-coded hidden inputs) which are captured by VuGen. When I find these, I point them out to the clients and request that they prepare the data for me.

Having said that, it’s best to be clear about what is correlation and what is parameterization, or you may end up spending effort on the wrong track.
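
As a rough sketch of the difference, assuming a hidden session token with made-up boundaries and URL: the token below is correlated (captured from a server response at run time), while the {username} and {password} values above are parameterized (supplied from your data file).

    // Correlation: capture a server-generated value from the next response.
    // Left/right boundaries and URL are illustrative only.
    web_reg_save_param("SessionToken",
        "LB=name=\"token\" value=\"",
        "RB=\"",
        "Ord=1",
        LAST);

    web_url("login_page",
        "URL=http://appserver.example.com/login",
        "Mode=HTML",
        LAST);

    // The captured value is then reused like any parameter: "Value={SessionToken}".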


Placement of the Parameter file

When you create a parameter for a script, by default it is stored in the root of the script folder. That is, if you save the script as test_script.usr in C:\, the parameter file will be saved in C:\test_script. Every script that has a parameter stores it in its own script folder. This is fine for a single script, but when multiple scripts in the load test share the same parameter file, it is advisable to centralize the parameter file.

Once I’ve prepared the data files, I place them in a central repository (folder) that all my scripts access. This is tidier and ensures consistency of the parameters used across all scripts.


Why parameterize different data?

Why does different data have a different effect? A simple example is uploading files of different sizes. If parameter A uploads a 2 MB file while parameter B uploads a 4 MB file, the latter will definitely generate higher throughput. This may be a concern for the client.

Another example is querying different items in a database. Parameter A may query and return a smaller result set while parameter B returns a bigger one, causing higher throughput.


Not the Obvious Data Dependency

As noted at the start, parameterization may result in data dependencies which the tester should be aware of and handle appropriately.

An example is username and password submission: a username will require a valid, matching password. This dependency may seem obvious. However, as pointed out earlier about hidden inputs, you will have to be cautious. Since VuGen records whatever is transmitted to the server, the hidden inputs are recorded as well. Therefore, you will need to check with the application developers to find out more about the hidden values associated with each parameter value. If possible, request that they provide a list of the values in Excel format for you to manipulate.


Which Settings are correct?

Parameterization settings vary depending on your application. You may want to take values sequentially, randomly or uniquely; ensure the settings match what the business process expects. Usually I define them as unique.


Testing the data

There are various ways to test whether the parameters are correct. You can perform the following:
  1. Replay the script in Vugen with one iteration
  2. Replay the script in Vugen with three iterations.
  3. Replay the script in Controller with one Vuser.
  4. Replay the script in Controller with three Vusers.
These will be sufficient to test the validity of the parameters used. You can also control which values are used by defining the starting row from which to retrieve data.
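
While replaying, you can also print the substituted values to the replay log to confirm the data is being picked up as expected. A small sketch, using the illustrative {username} parameter from earlier:

    // Write the value substituted in this iteration to the replay log so you can
    // verify the parameter settings (sequential/random/unique, starting row).
    lr_output_message("This iteration is using username=%s",
                      lr_eval_string("{username}"));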


Other tips & tricks

Some applications/business processes only allow one-time use of a parameter value, for example, deletion. In such cases, after each test, request that the client or the application team revert the changes for you so that there is still data for you to delete.


Related Topics

Scripts: Duplicating Files on the Fly
Scripts: Step Download Timeout
Scripts: VIEWSTATE
Scripts: Auto or Manual Correlation?
Scripts: Remove Think Time
Scripts: Set Debug Mode in Script
Scripts: Replay Failure – Use Full Extended Log!
Scripts: Starting a new transaction during iterations
Scripts: Any compatibility issues after upgrading LoadRunner versions?



Scripts: Any compatibility issues after upgrading LoadRunner versions?

Personally, I encountered a problem while porting scripts recorded in 8.0 to 8.1. It pertained to the Citrix protocol, where an additional parameter is needed for the APIs. This will be of particular interest to users running scripts in BPM (Business Process Monitor) of BAC (Business Availability Center). Note that BPM uses the same recording tool, VuGen from LoadRunner, but at a version lower than the currently released LoadRunner VuGen.

If I recall correctly, the Citrix API ctrx_sync_on_bitmap gained an additional parameter, "CTRX_LAST", at the end of its parameter list in 8.1 as compared to 8.0. Since then, I haven't had any problems with other protocol scripts.
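
For illustration, the change looked roughly like this; the coordinates and hash are made up, and the exact argument list depends on your own recording:

    // As recorded in 8.0 (roughly):
    //   ctrx_sync_on_bitmap(93, 227, 78, 21, "a1b2c3d4e5f67890");
    // After porting to 8.1, the same step needs the extra terminating argument:
    ctrx_sync_on_bitmap(93, 227, 78, 21, "a1b2c3d4e5f67890", CTRX_LAST);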

Anyway, my advice is as follows:
  1. Read the upgrade readme file.
  2. Clone an image prior to the upgrade. In this way, you can revert to the older version.
  3. Test the previously recorded scripts from the older version.

Have you encountered any script incompatibility issues before? Feel free to share with us!


Related Topics

Scripts: Duplicating Files on the Fly
Scripts: Step Download Timeout
Scripts: VIEWSTATE
Scripts: Auto or Manual Correlation?
Scripts: Remove Think Time
Scripts: Set Debug Mode in Script
Scripts: Replay Failure – Use Full Extended Log!
Scripts: Starting a new transaction during iterations



Understanding Network: Performance Measurements

Evaluating path performance means doing three types of measurements. Bandwidth measurements will give you an idea of the hardware capabilities of your network, such as the maximum capacity of your network. Throughput measurements will help you discover what capacity your network provides in practice, i.e., how much of the maximum is actually available. Traffic measurements will give you an idea of how the capacity is being used.

Performance Measurements

Two factors determine how long it takes to send a packet or frame across a single link. The amount of time it takes to put the signal onto the cable is known as the transmission time or transmission delay. This will depend on the transmission rate (or interface speed) and the size of the frame. The amount of time it takes for the signal to travel across the cable is known as the propagation time or propagation delay. Propagation time is determined by the type of media used and the distance involved.

Once we move to multi-hop paths, a third consideration enters the picture: the delay introduced by processing packets at intermediate devices such as routers and switches. This is usually called the queuing delay since, for the most part, it arises from the time packets spend in queues within the device. The total delay in delivering a packet is the sum of these three delays. Transmission and propagation delays are usually quite predictable and stable. Queuing delays, however, can introduce considerable variability.
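
As a rough worked example of how the three components add up (the link speed, frame size, distance and queuing delay below are assumed values, not measurements):

    /* Per-link delay for an assumed 1500-byte frame on a 100 Mbit/s link
       over 2 km of copper, plus an assumed queuing delay at a router. */
    #include <stdio.h>

    int main(void)
    {
        double frame_bits  = 1500.0 * 8;   /* frame size in bits             */
        double rate_bps    = 100e6;        /* interface speed                */
        double distance_m  = 2000.0;       /* cable length                   */
        double prop_speed  = 2.0e8;        /* roughly 2/3 the speed of light */
        double queuing_s   = 0.0005;       /* assumed queuing delay          */

        double transmission = frame_bits / rate_bps;    /* time to put the bits on the wire */
        double propagation  = distance_m / prop_speed;  /* time for the signal to travel    */
        double total        = transmission + propagation + queuing_s;

        printf("transmission %.6f s, propagation %.6f s, total %.6f s\n",
               transmission, propagation, total);
        return 0;
    }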

The term bandwidth is typically used to describe the capacity of a link.

Throughput is a measure of the amount of data that can be sent over a link in a given amount of time. Throughput estimates, typically obtained through measurements based on the bulk transfer of data, are usually expressed in bits per second or packets per second. Throughput is frequently used as an estimate of the bandwidth of a network, but bandwidth and throughput are really two different things. Throughput measurements may be affected by considerable overhead that is not included in bandwidth measurements. Consequently, throughput is a more realistic estimator of the actual performance you will see.

Throughput is generally an end-to-end measurement. When dealing with multi-hop paths, however, the bandwidths may vary from link to link. The bottleneck bandwidth is the bandwidth of the slowest link on a path, i.e., the link with the lowest bandwidth.


The above was extracted from the book, "Network Troubleshooting Tools" by Joseph D. Sloan.


Related Topics

Content Page - General



Understanding Network: How traceroute works?

The program was written by Van Jacobson and others. It is based on a clever use of the Time-To-Live (TTL) field in the IP packet’s header. The TTL field is used to limit the life of a packet. When a router fails or is misconfigured, a routing loop or circular path may result. The TTL field prevents packets from remaining on a network indefinitely should such a routing loop occur. A packet’s TTL field is decremented each time the packet crosses a router on its way through a network. When its value reaches 0, the packet is discarded rather than forwarded. When it is discarded, an ICMP TIME_EXCEEDED message is sent back to the packet’s source to inform the source that the packet was discarded. By manipulating the TTL field of the original packet, traceroute uses the information from these ICMP messages to discover paths through a network.

Traceroute sends a series of UDP packets with the destination address of the device you want a path to. By default, traceroute sends sets of three packets to discover each hop. Traceroute sets the TTL field in the first three packets to a value of 1 so that they are discarded by the first router on the path. When the ICMP TIME_EXCEEDED messages are returned by that router, traceroute records the source IP address of these ICMP messages. This is the IP address of the first hop on the route to the destination.

Next, three packets are sent with their TTL field set to 2. These will be discarded by the second router on the path. The ICMP messages returned by this router reveal the IP address of the second router on the path. The program proceeds in this manner until a set of packets finally has a TTL value large enough so that the packets reach their destination.

Typically, when the probe packets finally have an adequate TTL and reach their destination, they will be discarded and an ICMP PORT_UNREACHABLE message will be returned. This works because traceroute sends all its probe packets to what should be invalid port numbers, i.e., port numbers that aren’t usually used. To do this, traceroute starts with a very large port number, typically 33434, and increments this value with each subsequent packet. Thus, each of the three packets in a set will have a different, unlikely port number. The receipt of ICMP PORT_UNREACHABLE messages is the signal that the end of the path has been reached.
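
The sketch below shows only how the probes themselves are shaped (rising TTL, three probes per hop, unlikely and increasing UDP ports); reading the returned ICMP messages requires a raw socket and root privileges and is left out, and the destination address is a documentation-range example:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dest;
        int base_port = 33434;                 /* traceroute's usual starting port */

        memset(&dest, 0, sizeof dest);
        dest.sin_family = AF_INET;
        inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);   /* example destination */

        for (int ttl = 1; ttl <= 30; ttl++) {  /* default maximum of 30 hops */
            setsockopt(sock, IPPROTO_IP, IP_TTL, &ttl, sizeof ttl);
            for (int probe = 0; probe < 3; probe++) {      /* three probes per hop */
                dest.sin_port = htons(base_port++);        /* unlikely, increasing ports */
                sendto(sock, "probe", 5, 0, (struct sockaddr *)&dest, sizeof dest);
            }
        }
        close(sock);
        return 0;
    }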

Should a packet be lost, an asterisk is printed in place of the missing time. In some cases, all three times may be replaced with asterisks. This can happen for several reasons. First, the router at this hop may not return ICMP TIME_EXCEEDED messages. Second, some older routers may incorrectly forward packets even though the TTL is 0. A third possibility is that ICMP messages may be given low priority and may not be returned in a timely fashion. Finally, beyond some point on the path, ICMP packets may be blocked.

Options

-n: disables name resolution.

-v: enables verbose output; the source and packet sizes of the probes will be reported for each packet.

-m: defines the maximum number of hops (default 30) before halting.

-p: traceroute usually receives a PORT_UNREACHABLE message when it reaches its final destination because it uses a series of unusually large port numbers as the destination ports. Should a number actually match a port that has a running service, the PORT_UNREACHABLE message will not be returned. This is rarely a problem since three packets are sent with different port numbers, but, if it is, the -p option lets you specify a different starting port so these ports can be avoided.

-q: traceroute sends three probe packets for each TTL value, with a timeout of three seconds for replies; the number of probes can be changed with the -q option.

-w: defines the timeout value for replies to the probe packets.


The above was extracted from the book, "Network Troubleshooting Tools" by Joseph D. Sloan.


Related Topics

Content Page - General



Understanding Network: How Ping Works?

One network device sends a request for a reply to another device and records the time the request was sent. The device receiving the request sends a packet back. When the reply is received, the round-trip time for packet propagation can be calculated. The receipt of a reply indicates a working connection. This elapsed time provides an indication of the length of the path. Consistency among repeated queries gives an indication of the quality of the connection. Thus, ping answers the two basic questions. Do I have a connection? How good is that connection?

Clearly, for the program to work, the networking protocol must support this query/response mechanism. The ping program is based on Internet Control Message Protocol (ICMP), part of the TCP/IP protocol. ICMP was designed to pass information about network performance between network devices and exchange error messages. It supports a wide variety of message types, including query/response mechanism.

The normal operation of ping relies on two specific ICMP messages, ECHO_REQUEST and ECHO_REPLY, but it may respond to ICMP messages other than ECHO_REPLY when appropriate. In theory, all TCP/IP-based network equipment should respond to an ECHO_REQUEST by returning the packet to the source, but this is not always the case.

Interpreting Results

In different flavors of ping, results vary. However, for each packet we are typically given its size and source, an ICMP sequence number, a Time-To-Live (TTL) counter, and the round-trip time. Of course, the sequence number and the round-trip time are the most revealing when evaluating basic connectivity.

When each ECHO_REQUEST packet is sent, the time the packet is sent is recorded in the packet. This is copied into the corresponding ECHO_REPLY packet by the remote host. When an ECHO_REPLY packet is received, the elapsed time is calculated by comparing the current time to the time recorded in the packet, i.e., the time the packet was sent. This difference, the elapsed time, is reported. If no ECHO_REPLY packet is received that matches a particular sequence number, that packet is presumed lost. The size and the variability of elapsed times will depend on the number and speed of intermediate links as well as the congestion on those links.
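
A minimal sketch of that elapsed-time calculation (the timestamps here are taken locally rather than carried in a real ICMP payload):

    #include <stdio.h>
    #include <sys/time.h>

    /* Round-trip time in milliseconds from the send and receive timestamps. */
    double rtt_ms(struct timeval sent, struct timeval received)
    {
        return (received.tv_sec - sent.tv_sec) * 1000.0 +
               (received.tv_usec - sent.tv_usec) / 1000.0;
    }

    int main(void)
    {
        struct timeval sent, received;
        gettimeofday(&sent, NULL);       /* recorded in the ECHO_REQUEST payload */
        /* ... ECHO_REQUEST travels out and the ECHO_REPLY comes back ... */
        gettimeofday(&received, NULL);   /* taken when the ECHO_REPLY arrives */
        printf("round-trip time: %.3f ms\n", rtt_ms(sent, received));
        return 0;
    }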

It may seem that the TTL field could be used to estimate the number of hops on a path. Unfortunately, this is problematic. When a packet is sent, the TTL field is initialized and is subsequently decremented by each router along the path. If it reaches zero, the packet is discarded. This imposes a finite lifetime on all packets, ensuring that, in the event of a routing loop, the packet won’t remain on the network indefinitely. Unfortunately, the TTL field may or may not be reset at the remote machine and, if it is reset, there is little consistency in what it is set to. Thus, you need to know very system-specific information to use the TTL field to estimate the number of hops on a path.

Options
  • -c: allows you to specify the number of packets you want to send.
  • -f: used to flood packets onto the network. Packets are sent as fast as the receiving host can handle them, which is useful for stress testing a link or getting some indication of the comparative performance of interfaces. This is restricted to root.
  • -l: also used to flood packets onto the network. It takes a count and sends out that many packets as fast as possible before falling back to normal mode. This can be used to see how a router handles a flood of packets. This is restricted to root.
  • -i: allows you to specify the amount of time in seconds to wait between sending consecutive packets.
  • -n: restricts output to numeric form, which is useful if you have DNS problems.
  • -v: used for verbose output.
  • -q, -Q: used for quiet output.
  • -s: specifies how much data to send. If set too small (less than 8), there won’t be space in the packet for a timestamp. Setting the packet size can help in diagnosing problems caused by path Maximum Transmission Unit (MTU) settings (the largest frame size that can be sent on the path) or fragmentation problems. (Fragmentation is dividing data among multiple frames when a single packet is too large to cross a link. It is handled by the IP portion of the protocol stack.) The general approach is to increase packet sizes up to the maximum allowed to see if at some point you have problems. When this option isn’t used, ping defaults to 64 bytes, which may be too small a packet to reveal some problems. Also, remember that ping does not count the IP or ICMP header in the specified length, so your packets will be 28 bytes larger than you specify.
You could conceivably see MTU problems with protocols, such as PPP, that use escaped characters as well. With escaped characters, a single character may be replaced by two characters. The expansion of escaped characters increases the size of the data frame and can cause problems with MTU restrictions or fragmentation.

  • -p: allows you to specify a pattern for the data included within the packet after the timestamp.

The above is not the entire list of options, so be sure to consult the documentation if things don’t work as expected.

Using Ping

To isolate problems with ping, you will want to run it repeatedly, changing your destination address so that you work your way through each intermediate device to your destination. You should begin with your loopback interface. Use either localhost or 127.0.0.1. Next, ping your interface by IP number. (Run ifconfig –a if in doubt.) If either of these fails, you know that you have a problem with the host.

Next, try a host on a local network that you know is operational. Use its IP address rather than its hostname. If this fails, there are several possibilities. If other hosts are able to communicate on the local network, then you likely have problems with your connection to the network. This could be your interface, the cable to your machine, or your connection to a hub or switch. Of course, you can’t rule out configuration errors such as media type on the adapter or a bad IP address or mask.

Next, try to reach the same host by name rather than by number. If this fails, you almost certainly have problems with name resolution.

Try reaching the near and far interfaces of the router. This will turn up any basic routing problems you may have on your host or connectivity problems getting to your router.

If all goes well here, you are ready to ping remote computers. (You will need to know the IP addresses of the intermediate devices to do this test. If in doubt, use traceroute to determine them.) Realize, of course, that if you start having failures at this point, the problem will likely lie beyond your router. For example, your ICMP ECHO_REQUEST packets may reach the remote machine, but it may not have a route back to your machine to use for the ICMP ECHO_REPLY packets.

When faced with failure at this point, your response will depend on who is responsible for the machines beyond your router. If this is still part of your network, you will want to shift your tests to machines on the other side of the router and try to work in both directions.

If these machines are outside your responsibility or control, you will need to enlist the help of the appropriate person. Before you contact this person, you should collect as much information as you can. There are three things you may want to do. First, go back to using IP numbers if you have been using names. As said before, if things start working, you have a name resolution problem.

Second, if you were trying to ping a device several hops beyond your router, go back to closer machines and try to zero in on exactly where you first encountered the problem.

Finally, be sure to probe from more than one machine. While you may have a great deal of confidence in your local machine at this point, your discussion with the remote administrator may go much more smoothly if you can definitely say that you are seeing this problem from multiple machines instead of just one.

Problems with Ping

The program does not exist in isolation, but depends on the proper functioning of other elements of the network. Ping usually depends upon ARP and DNS. As previously mentioned, if you are using a hostname rather than an IP address as the destination, the name of the host will have to be resolved before ping can send any packets. You can bypass DNS by using the IP address.

It is also necessary to discover the host’s link-level address for each host along the path to the destination. Although this is rarely a problem, should ARP resolution fail, then ping will fail. You can avoid this problem, in part, by using static ARP entries to ensure that the ARP table is correct. A more common problem is that the time reported by ping for the first packet sent will often be distorted, since it reflects both transit time and ARP resolution time. On some networks, the first packet will often be lost. You can avoid this by sending more than one packet and ignoring the results for the first packet.

The above was extracted from the book, "Network Troubleshooting Tools" by Joseph D. Sloan.


Related Topics

Content Page - General


General: Planning for Load Testing - Recommendations

After identifying trends, it’s time to propose an action plan, proposal or recommendation to your client.

This may be as simple as telling the client, "Dear client, your application is running at this rate; I would recommend revisiting the SLAs with your users," or "Dear client, the server housing the application is encountering processor contention; I would recommend distributing the load with an additional server (or processor)." This is, of course, tied to the initial requirements and the limitations the clients are bound by.

At this point, if they have follow-up actions to improve the system, do keep a record of the parameters changed on the servers. This will be useful for tracking the changes to their application over time and determining whether each change actually enhanced the application. Unless you are part of the server team, you might not be involved in the tuning process itself.

Once the changes have been made, conduct a load test to verify the changes.


Related Topics

General: Planning for Load Testing
Content Page - General


General: Planning for Load Testing - Analyzing

This stage is intertwined with monitoring. As with Monitoring, it is important to have sound knowledge of the server or database internals; with that, you will know what to investigate further and what each graph represents.

A graph looks great with its lines flying up and down, but that is only raw information. It takes a lot of experience to translate that information into useful knowledge. This is done by merging and correlating the generated load against the utilization of resources. You must be able to "see" the trend and identify possible bottlenecks.

I would recommend acquiring as much information as you can about the server you are monitoring (from both the software and hardware perspectives), as this is required for your analysis. You can refer to a previous consolidated article, "Content Page - General", which provides a portion of the basic information that will be useful in facilitating the analysis.

Besides that knowledge, you will have to refer back to the initial requirements set down by your clients. These can range from a top-down interest/approach, such as transaction response time, to a bottom-up technical interest/approach, such as the utilization of server/network resources. In any case, present what you've gathered in their best interest.

From there, you will know what to look out for in the graphs, such as the number of Vusers actually generated and whether the application was able to meet the defined transaction response time, or whether the utilization across all servers was acceptable (under-utilized, over-utilized or unevenly balanced).

Some combinations of graphs that are useful in the analysis are as follows (but not limited to this list):

Vuser - System Resource (Windows or Unix) or any other resource graphs

Useful in determining/describing the capacity that the resource (server) can work with the generated load.

Vuser - Transaction Response Time

Useful in determining/describing the transaction response time over time as Vusers are ramped into the application. This will determine the point, or number of users, that the server can handle with respect to the defined transaction response time.

Vusers - Transactions Per Seconds

Useful in determining/describing the number of transactions generated with respect to the number of Vusers generated.

Vusers - Errors

Useful in determining/describing the breaking point of the application, where transactions start to fail (consistently) with respect to the number of Vusers generated.

Vusers - Errors - System Resource

Useful in determining/describing whether the resources (servers) are experiencing difficulties handling the generated load, with respect to the number of errors.

Transaction Response Time (Percentile)

Useful in determining/describing the overall performance of the transactions in percentile terms.

System Resource - System Resource

Useful in determining whether the load has been properly distributed across the resources (servers). Usually, I will also ask my clients to check the load balancer logs and the application logs for the corresponding activities.


Please feel free to add or comment if you have a good combination of graphs!




General: Planning for Load Testing - Monitoring

I separated Monitoring from Monitor Setup because the setup happens prior to the load test runs. In this stage, what we are concerned with are the counters and metrics to monitor. If you are starting off without any knowledge or history of the system, such as past performance issues, it’s advisable to monitor the four main categories: Processor, Memory, Disk and Network. Once the load test completes, you should have an overview of the application’s performance. From there, go to a second level of monitoring, and on to the nth level, until you are satisfied with the investigation.

For example, in the first load test you may observe that page faults are occurring at a level higher than the accepted threshold, indicating memory problems. From there, you may want to investigate the cause of the page faults, such as whether they are hard or soft page faults. You might therefore monitor Transition Faults/sec, Pages Input/sec and Pages Output/sec to resolve your doubts.
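
A small worked example with made-up counter readings, estimating how much of the observed Page Faults/sec is "hard" (disk-backed; approximated here by Page Reads/sec, which the Memory Glossary notes is equivalent to the hard page fault rate) versus "soft" (Transition Faults/sec):

    #include <stdio.h>

    int main(void)
    {
        double page_faults_sec       = 450.0;  /* sample Memory\Page Faults/sec       */
        double page_reads_sec        = 60.0;   /* sample Memory\Page Reads/sec (hard) */
        double transition_faults_sec = 300.0;  /* sample Memory\Transition Faults/sec */

        printf("hard faults: %.1f%% of all page faults\n",
               page_reads_sec / page_faults_sec * 100.0);
        printf("soft (transition) faults: %.1f%% of all page faults\n",
               transition_faults_sec / page_faults_sec * 100.0);
        return 0;
    }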

Of course, knowing what to investigate further requires knowledge of the application, server and network, and of what the counters and metrics mean. It also helps to have a good understanding of the internal workings of each component.


Related Topics

Content Page - Monitors




Monitors: Memory Glossary

This is a compilation of memory-related counters. Do feel free to reply or comment should you have additional information on these counters.


Available Bytes
  • Process Working Set growth becomes constrained when Available Bytes drops too low.

Pool Nonpaged Bytes
  • The system’s non-pageable (fixed) memory.


Pool Paged Resident Bytes
  • The OS‘s pageable memory that is currently resident in RAM.


System Code Resident Bytes
  • [Coming soon]


System Driver Resident Bytes
  • Total System Resident Bytes = Pool Nonpaged Bytes + Pool Paged Resident Bytes + System Code Resident Bytes + System Driver Resident Bytes + System Cache Resident Bytes.


System Cache Resident Bytes
  • The current amount of RAM used for the file cache.


Page Faults/sec
  • Can be a grossly misleading number. Page Faults/sec = “soft” Transition Faults + application file Cache Faults + demand zero faults + hard page faults.


Page Reads/sec
  • This counter measures the number of requests to the I/O manager to retrieve pages of memory from the disk. Despite the name of this counter, it measures requests, not pages; a request can be for more than a single page.
  • Equivalent to hard page fault rate.


Page Writes/sec
  • This counter measures the number of requests to the I/O manager to write pages of memory to the disk. Again, each request can be for more than a single page.
  • Updated “dirty” pages must be flushed to disk before they can be reused by a different application.


Pages Input/sec
  • This counter measures the number of pages read from the disk per second. Combining this counter with the Page Reads/sec counter can tell you how many pages are retrieved per request.
  • Calculate the bulk paging rate: Pages Input/sec / Page Reads/sec.


Pages Output/sec
  • This counter measures the number of pages written to the disk per second. Combining this counter with the Page Writes/sec counter can tell you how many pages are written per request.
  • Try to limit Pages Input/sec + Pages Output/sec to 10 – 20% of total disk bandwidth, if possible. Disk bandwidth absorbed for paging operations is unavailable for application processes.


Pages/sec
  • This counter measures the total number of pages read and written to the disk. This counter represents the sum of Pages Input/sec and Pages Output/sec. Using this counter along with the Disk Bytes/sec counter from the Physical Disk Object, you can determine what portion of the data transferred to the disk is due to memory access, and what portion is due to file system access.


Committed Bytes
  • Represents virtual memory pages backed in either RAM or secondary storage (paging files). Calculate a virtual:real memory contention index = Committed Bytes / Total RAM. Consider adding RAM when this ratio starts to approach 2:1.


Commit Limit
  • Maximum number of virtual memory pages that can be allocated without extending the paging file(s).


% Committed Bytes In Use
  • Committed Bytes / Commit Limit. Consider adding RAM when consistently > 70% on a server.


Cache Bytes
  • Actually, the system address space working set, but includes the file cache. The sum of Pool Paged Resident Bytes + System Code Resident Bytes + System Driver Resident Bytes + System Cache Resident Bytes.


Transition Faults/sec
  • “Soft” page faults resolved without having to access the disk.


Cache Faults/sec
  • Normal application file I/O operations are diverted to use the paging subsystem. Each file cache fault leads to a physical disk read I/O operation.


Demand Zero Faults/sec
  • The rate at which applications require brand new pages.


Write Copies/sec
  • Private Copy on Write pages from shared DLLs.


Pool Paged Bytes
  • Calculate a virtual:real memory contention index = Pool Paged Resident Bytes / Pool Paged Bytes. Compare this to Page Reads/sec to anticipate real memory bottlenecks.
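
A short worked example, using made-up sample readings, of the rules of thumb above (the virtual:real memory contention index and % Committed Bytes In Use):

    #include <stdio.h>

    int main(void)
    {
        double committed_bytes = 3.2e9;   /* sample Committed Bytes reading */
        double total_ram       = 2.0e9;   /* 2 GB of physical RAM           */
        double commit_limit    = 4.5e9;   /* sample Commit Limit            */

        /* Approaching 2:1 suggests adding RAM. */
        printf("contention index: %.2f : 1\n", committed_bytes / total_ram);
        /* Consistently above 70% on a server also suggests adding RAM. */
        printf("%% Committed Bytes In Use: %.1f%%\n",
               committed_bytes / commit_limit * 100.0);
        return 0;
    }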



Related Topics

Content Page - General
Content Page - Monitors
Monitors: What metrics/counters to monitor for Windows System Resource?
Virtual Addressing
Page Fault Resolution
Page Fault Resolution (Illustration)
Performance Concerns
Virtual Memory Shortage Alerts
Available Bytes
LRU
System Working Set
Detecting Memory Leaks
Measuring Memory Utilization



