Performance Testing, LoadRunner Tips&Tricks


General: Understanding Processor

Windows is a multiprogramming OS, which means that it manages and selects among multiple programs that can all be active, in various stages of execution, at the same time. The dispatchable unit in Windows, representing the application or system code to be executed, is the thread. The Scheduler running inside the Windows OS kernel keeps track of every thread in the system and points the processor hardware at threads that are ready to run.

The basic rationale for multiprogramming is that most computing tasks do not execute instructions continuously. After a program thread executes for some period of time, it usually needs to perform an I/O operation such as reading information from the disk, printing characters on a printer, or drawing data on the display. While the program is waiting for this I/O operation to complete, it does not need to hold on to the processor. An OS that supports multiprogramming saves the status of a program that is waiting, restores its status when it is ready to resume execution, and finds something else to run in the interim.
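
To make this concrete, here is a minimal Win32 sketch in C. The thread routines, the loop count, and the Sleep() call standing in for a real blocking read such as ReadFile() are illustrative assumptions, not any particular application: while one thread is blocked on its "I/O", Windows dispatches the other, CPU-bound thread.

    /* Minimal Win32 sketch: while one thread blocks on (simulated) I/O,
       the Scheduler keeps the CPU busy with another ready thread. */
    #include <windows.h>
    #include <stdio.h>

    DWORD WINAPI IoBoundThread(LPVOID arg)
    {
        (void)arg;
        printf("I/O thread: issuing blocking read...\n");
        Sleep(10);                        /* ~10 ms, like a single disk access */
        printf("I/O thread: read complete, ready to run again\n");
        return 0;
    }

    DWORD WINAPI CpuBoundThread(LPVOID arg)
    {
        volatile unsigned long i, sum = 0;
        (void)arg;
        for (i = 0; i < 50000000UL; i++)  /* pure computation, never blocks */
            sum += i;
        printf("CPU thread: finished computing (sum=%lu)\n", sum);
        return 0;
    }

    int main(void)
    {
        HANDLE threads[2];
        threads[0] = CreateThread(NULL, 0, IoBoundThread, NULL, 0, NULL);
        threads[1] = CreateThread(NULL, 0, CpuBoundThread, NULL, 0, NULL);
        /* While the I/O thread is blocked, Windows dispatches the CPU thread. */
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        CloseHandle(threads[0]);
        CloseHandle(threads[1]);
        return 0;
    }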

Because I/O devices are much slower than the processor, I/O operations take a long time compared to CPU processing. A single I/O operation to a disk may take 10 milliseconds, which means that the disk can execute only about 100 such operations per second. Printers, which are even slower, are usually rated in pages printed per minute. In contrast, the processor might execute an instruction every one or two clock cycles.

In a multiprogramming OS, programs execute until they block, normally because they are waiting for an external event to occur. When this awaited event finally does occur, interrupt processing makes the program ready to run again.
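
The block-then-wake cycle can be sketched with a Win32 event object. In this illustrative example, main() plays the role of the interrupt/completion processing that signals the awaited event and makes the blocked worker thread runnable again; the names and delays are assumptions made for the sketch.

    /* Sketch of blocking on an event: the worker blocks until the event is
       signalled, then becomes ready and is dispatched again. */
    #include <windows.h>
    #include <stdio.h>

    static HANDLE g_event;

    DWORD WINAPI Worker(LPVOID arg)
    {
        (void)arg;
        printf("worker: waiting for event (thread is blocked)\n");
        WaitForSingleObject(g_event, INFINITE);   /* thread leaves the ready state */
        printf("worker: event signalled, thread is runnable again\n");
        return 0;
    }

    int main(void)
    {
        HANDLE thread;
        g_event = CreateEvent(NULL, FALSE, FALSE, NULL);  /* auto-reset, non-signalled */
        thread  = CreateThread(NULL, 0, Worker, NULL, 0, NULL);

        Sleep(100);              /* stand-in for the awaited external event */
        SetEvent(g_event);       /* makes the blocked worker ready to run */

        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
        CloseHandle(g_event);
        return 0;
    }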

Multiprogramming introduces the possibility that a program will encounter delays waiting for the processor while some other program is running. In Windows, following an interrupt, the thread that was waiting on the event that just occurred usually receives a priority boost from the Windows Scheduler. As a result, the thread that was executing when the interrupt occurred is often pre-empted by that higher-priority thread once interrupt processing completes. This can delay the pre-empted thread's execution, but it tends to balance processor utilization across CPU-bound and I/O-bound threads.
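
Windows applies this dynamic priority boost automatically; the Win32 API only exposes a way to inspect it or opt out of it per thread (or per process). A small sketch, with error handling omitted for brevity:

    /* Sketch: inspect and, if desired, disable the dynamic priority boost
       for the current thread. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        BOOL boostDisabled = FALSE;

        /* Is the dynamic boost currently disabled for this thread? */
        if (GetThreadPriorityBoost(GetCurrentThread(), &boostDisabled))
            printf("priority boost %s\n", boostDisabled ? "disabled" : "enabled");

        /* Opt out of the boost, e.g. for latency-sensitive measurements. */
        SetThreadPriorityBoost(GetCurrentThread(), TRUE);

        return 0;
    }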

Because of the great disparity between the speed of I/O devices and the speed of the processor, an individual thread allowed to execute continuously from start to finish would make very inefficient use of the processor: the CPU would sit idle during every I/O wait. It is equally important to understand that multiprogramming actually slows down individual execution threads, because they are not allowed to run uninterrupted. When a waiting thread becomes ready to execute again, it may well be delayed because some other thread is in line ahead of it. Multiprogramming is an explicit trade-off: overall CPU throughput improves, quite possibly at the expense of the execution time of any individual thread.
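
One rough way to see this trade-off on a running thread is to compare its wall-clock elapsed time with the CPU time it was actually granted; the gap is time spent blocked or queued behind other threads. A sketch, with the workload sizes chosen arbitrarily for illustration:

    /* Sketch: compare a thread's wall-clock time with the CPU time it was
       actually given.  The difference is time spent blocked or waiting in
       the ready queue behind other threads. */
    #include <windows.h>
    #include <stdio.h>

    static ULONGLONG FileTimeToMs(const FILETIME *ft)
    {
        ULARGE_INTEGER u;
        u.LowPart  = ft->dwLowDateTime;
        u.HighPart = ft->dwHighDateTime;
        return u.QuadPart / 10000ULL;     /* 100-ns units -> milliseconds */
    }

    int main(void)
    {
        FILETIME create, exit_, kernel, user;
        DWORD start = GetTickCount();
        volatile unsigned long i, sum = 0;

        for (i = 0; i < 100000000UL; i++) /* some CPU work to measure */
            sum += i;
        Sleep(50);                        /* plus a blocking wait */

        if (GetThreadTimes(GetCurrentThread(), &create, &exit_, &kernel, &user)) {
            unsigned long cpuMs  = (unsigned long)(FileTimeToMs(&kernel) + FileTimeToMs(&user));
            unsigned long wallMs = (unsigned long)(GetTickCount() - start);
            printf("wall-clock: %lu ms, CPU time: %lu ms, waiting/queued: %lu ms\n",
                   wallMs, cpuMs, (wallMs > cpuMs) ? wallMs - cpuMs : 0UL);
        }
        return 0;
    }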