Tuesday, July 30, 2013

CPU, Processor, Core, Thread and Clustering

Nowadays, "CPU" and "processor" mean almost the same thing.

A dual-core CPU is like having two CPUs inside one chip, but both cores have to access motherboard resources through the same set of pins.

A processor "core" is a physical processing unit on the die (the silicon wafer, the actual chip). Older CPUs have only one core per chip. For these, to get two processing units (cores) you must have a motherboard with two separate CPU sockets. With two physical CPUs, communication between them has to go out one CPU socket, across the motherboard support circuitry, and in through the socket of the second CPU. This is considerably slower than the speed at which things happen inside the circuitry of a single chip. So, to increase processing speed and to lower manufacturing and end-user costs, individual CPUs were designed with more than one processing unit (core) on the chip. A 2-core CPU is very much like having two separate CPUs, but it is less expensive and can often be faster than two single-core CPUs of the same capability because of the increased communication speed between the cores and because they can share common circuitry such as a cache.
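A quick way to see how many cores and hardware threads your own machine exposes is to ask the operating system. This is just a minimal sketch: it uses Python's standard library plus the third-party psutil package (an assumption; it must be installed separately) to separate physical cores from logical processors.

import os

# Logical processors visible to the OS (physical cores x threads per core).
print("Logical processors:", os.cpu_count())

# psutil is a third-party package (assumed installed: pip install psutil);
# it can report physical cores separately from hardware threads.
try:
    import psutil
    print("Physical cores:    ", psutil.cpu_count(logical=False))
    print("Logical processors:", psutil.cpu_count(logical=True))
except ImportError:
    print("psutil not installed; only the logical count is available")

On a dual-core machine with multithreading enabled, the logical count is typically twice the physical count.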

Thread:
In a simplistic view, a thread (a sequence of steps to be executed) is constructed in a "pipeline" and then "scheduled" for execution by a CPU core. Once a thread is scheduled, the CPU core executes the pipelined instructions. Frequently, while the thread is executing, the CPU needs more than just the series of instructions: it needs data. That data may be only around a hundred nanoseconds away in some RAM location, or it may be millions of nanoseconds (milliseconds) away on a disk drive. When a core has to stop executing the thread while it waits to fetch the external data, time is lost. No other thread can be executed while the waiting thread is scheduled on that core (the thread is given an allotment of time and is not kicked out early).
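To get a rough feel for the memory-versus-disk gap described above, here is an illustrative Python timing sketch. The scratch file name is made up, and OS caching can hide true disk latency, so treat the numbers as indicative only, not a benchmark.

import os
import time

# Average a large number of in-memory accesses.
data = list(range(1_000_000))
n = 1_000_000
start = time.perf_counter()
total = 0
for i in range(n):
    total += data[i]
mem_ns = (time.perf_counter() - start) / n * 1e9
print("average in-memory access: ~%.0f ns (including loop overhead)" % mem_ns)

# Write a scratch file, then time reading it back. The OS page cache
# often hides true disk latency, so this is only a rough illustration.
path = "latency_demo.bin"  # hypothetical scratch file name
with open(path, "wb") as f:
    f.write(os.urandom(8_000_000))
start = time.perf_counter()
with open(path, "rb") as f:
    f.read()
disk_ms = (time.perf_counter() - start) * 1e3
print("8 MB file read: ~%.2f ms" % disk_ms)
os.remove(path)

Even with caching in its favor, the file read is orders of magnitude slower than touching data already in memory.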

This is similar to a single-lane bridge. Only one car can use the bridge at a time. If a driver stops to take a scenic picture, no other car can use the bridge until the driver gets his picture and moves off the bridge. To prevent complete closure of the core, the CPU has a mechanism to swap an entire thread off of the core if it experiences a serious problem (like a car breaking down), but that is a very costly process and is not used if the thread is just waiting for I/O to complete so that it may continue executing. In the car analogy, forcing a hung thread off the core prematurely is like waiting for a tow truck to get the broken-down car off the bridge: it takes quite a while, but it is still quicker than repairing the car on the bridge.

A multi-threaded core is like a bridge that has a passing lane. When the driver on the bridge stops to take a picture, the car behind him can still use the bridge by going around the stopped car in the passing lane. Think of it as two different pipelines where thread executions are constructed. Still, only one can be scheduled on the core at a time. But if the executing thread is waiting on I/O, the other thread can jump onto the core and get a little CPU time while the thread assigned to the core is waiting.

This can look like two cores (two pipelines executing at the same time), BUT IT IS NOT. Only one thread at a time can be executed by the core; the second pipeline simply allows another thread to execute during the waiting periods of the first. Depending upon the specific application design, data needs, I/O, etc., multithreading can actually decrease performance, or it may increase performance by up to about 40% (as cited by Intel and Microsoft sources).
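The same overlap idea can be demonstrated in software. The sketch below uses ordinary Python threads (not hardware hyper-threading, but the same principle of filling one task's wait time with another task's work); io_bound_task is a made-up name and time.sleep() stands in for a blocking read.

import threading
import time

def io_bound_task(name):
    # time.sleep() stands in for a blocking I/O wait (disk or network read).
    time.sleep(1.0)
    print(name, "finished")

# Sequential: the second task cannot start until the first wait ends (~2 s).
start = time.perf_counter()
io_bound_task("task-1")
io_bound_task("task-2")
print("sequential: %.2f s" % (time.perf_counter() - start))

# Two threads: while one task waits, the other runs, so the waits overlap (~1 s).
start = time.perf_counter()
threads = [threading.Thread(target=io_bound_task, args=(name,))
           for name in ("task-3", "task-4")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("threaded:   %.2f s" % (time.perf_counter() - start))

The threaded version finishes in roughly half the time only because the tasks spend most of their time waiting; purely compute-bound tasks would not see this benefit.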

Intel CPUs support multithreading (Intel calls it Hyper-Threading), but only two threads per core. AMD CPUs do not currently support multithreading.

Clustering:


A server cluster is a group of independent servers running Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, and working together as a single system to provide high availability of services for clients. When a failure occurs on one computer in a cluster, resources are redirected and the workload is redistributed to another computer in the cluster. You can use server clusters to ensure that users have constant access to important server-based resources.

Advantages:

Clustering guards against the following kinds of failure:

Application and service failures, which affect application software and essential services.
System and hardware failures, which affect hardware components such as CPUs, drives, memory, network adapters, and power supplies.
Site failures in multisite organizations, which can be caused by natural disasters, power outages, or connectivity outages.

Single Quorum Device Cluster:

The most widely used cluster type is the single quorum device cluster, also called the standard quorum cluster. In this type of cluster there are multiple nodes with one or more cluster disk arrays, also called the cluster storage, and a connection device, that is, a bus. Each disk in the array is owned and managed by only one server at a time. The disk array also contains the quorum resource. The following figure illustrates a single quorum device cluster with one cluster disk array.

[Figure: a single quorum device cluster with one cluster disk array]

Because single quorum device clusters are the most widely used cluster type, this overview focuses on them.
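As a rough illustration of the ownership rule described above (each cluster disk, including the quorum resource, is owned by exactly one node at a time, and ownership moves when a node fails), here is a toy Python sketch. It is not the Windows Server clustering API, just a conceptual model with made-up node and disk names.

# Toy model of disk ownership and failover in a single quorum device cluster.
nodes = {"node-a": "online", "node-b": "online"}
disk_owner = {"quorum-disk": "node-a", "data-disk-1": "node-a"}

def fail_over(failed_node):
    """Mark a node as failed and move its disks to a surviving node."""
    nodes[failed_node] = "failed"
    survivors = [n for n, state in nodes.items() if state == "online"]
    if not survivors:
        raise RuntimeError("no surviving node; the cluster is down")
    for disk, owner in disk_owner.items():
        if owner == failed_node:
            disk_owner[disk] = survivors[0]

fail_over("node-a")
print(disk_owner)  # both disks are now owned by node-b

In the real product the surviving node also takes over the clustered applications and services, so clients keep working against the same resources.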
