2. Hardware for Processing

Processing takes place within the central processing unit (CPU), so the CPU is the major focus of this section. We consider the CPU and its related components, CPU design factors that increase processing speed, and various historical and current trends in CPU design.

THE CPU AND ITS RELATED COMPONENTS

In reality there is a large variety of CPU designs, with different components and different methods of operation; however, they are all based on the same basic components and operational principles. Our aim in this section is to introduce the major components within a simple, typical CPU. In later discussions we introduce various modifications used to improve this basic design.

The designs of all CPUs in common usage today are derived from the “stored program concept” originally described by John von Neumann in 1945. This concept, as the name suggests, enabled not just data but also program instructions to be stored and hence reused. The “stored program concept” is a logical description of processing; it does not address the physical materials or design required to implement the concept. As a consequence, the components within von Neumann’s stored program concept are functional components rather than physical components; that is, the components are identified according to the tasks they perform rather than because they are physically separate. So what are these logical or functional components and how do they operate to process data?
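
To make the idea concrete, the short Python sketch below stores a tiny “program” and its data side by side in one simulated main memory. The instruction codes (10 = LOAD, 20 = ADD, 99 = HALT) and the memory layout are invented for this example only; they do not belong to any real machine.

    # A minimal sketch of the stored program concept: instructions and data
    # share the same memory, so a stored program can be run again and again
    # without being re-entered.
    memory = [
        10, 6,   # addresses 0-1: LOAD the value at address 6 into the accumulator
        20, 7,   # addresses 2-3: ADD the value at address 7 to the accumulator
        99, 0,   # addresses 4-5: HALT
        40,      # address 6: data
        2,       # address 7: data
    ]
    # To the memory itself the 10 at address 0 (an instruction code) and the
    # 40 at address 6 (data) are just stored values; how each is used depends
    # entirely on whether the CPU fetches it as an instruction or as data.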



Control Unit (CU)

  • The control unit directs the operation of the other components. 
  • It interprets instructions and ensures they are performed in the correct sequence and at the correct time. 
  • To perform these tasks the control unit includes various temporary storage areas, called registers. The instruction register holds the instruction about to be executed, and the program counter holds the address in main memory of the next instruction. 
  • The system clock on the motherboard generates equally spaced signals; these signals are used to ensure operations are performed at the correct time. 
  • The control unit and the arithmetic logic unit combine to form the central processing unit. A simple sketch of how the control unit’s registers are used during processing follows this list.
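
As a rough illustration of how the control unit uses these registers, the Python sketch below steps through the tiny hypothetical program from the previous sketch. The instruction codes (10 = LOAD, 20 = ADD, 99 = HALT), the two-word instruction format and the variable names are assumptions made purely for this example, not a description of any real CPU.

    # Simplified fetch-decode-execute cycle using the hypothetical codes above.
    memory = [10, 6, 20, 7, 99, 0, 40, 2]

    program_counter = 0          # address of the next instruction
    instruction_register = None  # instruction currently being executed
    accumulator = 0              # general-purpose register used by the ALU

    while True:
        # Fetch: copy the instruction at the program counter into the
        # instruction register, then advance the program counter.
        instruction_register = (memory[program_counter],
                                memory[program_counter + 1])
        program_counter += 2

        opcode, operand_address = instruction_register

        # Decode and execute: the control unit directs the ALU and memory.
        if opcode == 10:        # LOAD a value from memory into the accumulator
            accumulator = memory[operand_address]
        elif opcode == 20:      # ADD a value from memory to the accumulator
            accumulator = accumulator + memory[operand_address]
        elif opcode == 99:      # HALT
            break

    print(accumulator)           # 42, the result left in the accumulator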


Arithmetic Logic Unit (ALU)

The ALU is where the actual processing of data occurs; in essence, the ALU performs
all processing information processes. The ALU knows how to execute a relatively
small number of instructions; however, it does so only when directed by the control
unit. A variety of general-purpose registers are closely associated with and
accessible to the ALU. These registers are used to hold data prior to, during and after
execution. The accumulator is the most important of these registers; it is used during
execution, and once execution has completed it holds the new value or result.
The word arithmetic refers to basic mathematical operations such as addition and
subtraction. The word logic refers to logical operations such as greater than, equal to
and less than. Each of these operations is performed on binary data using binary
instructions.
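
The Python sketch below is a purely illustrative imitation of an ALU performing one arithmetic and two logical operations on two binary values, with the results left in an accumulator; the values and variable names are invented for the example.

    # Illustrative ALU operations on binary data. bin() is used only to
    # display the values in binary; a real ALU works on bit patterns directly.
    a = 0b0110                   # 6
    b = 0b0011                   # 3

    accumulator = a + b          # arithmetic: addition
    print(bin(accumulator))      # 0b1001 (9)

    accumulator = a - b          # arithmetic: subtraction
    print(bin(accumulator))      # 0b11 (3)

    print(a > b)                 # logic: greater than  -> True
    print(a == b)                # logic: equal to      -> False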


Main Memory

Both data and instructions are stored in main memory prior to and after processing.
Main memory is primarily RAM; however, modern processors also include various
types or levels of cache to improve performance. Cache is logically part of main
memory. Each location in main memory has a unique address. These addresses are
used to locate the next instruction to be processed and also to locate the data required
for processing.
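
A minimal sketch of addressing, reusing the invented memory layout from the earlier sketches: the CPU locates an instruction or a data value simply by supplying its address, and a result is written back to memory in the same way.

    # Each location has a unique address; locating a value means indexing by it.
    memory = [10, 6, 20, 7, 99, 0, 40, 2]

    instruction_address = 0      # supplied by the program counter
    data_address = 6             # supplied by an instruction's operand field

    print(memory[instruction_address])   # 10, fetched as an instruction code
    print(memory[data_address])          # 40, fetched as data

    memory[data_address] = 42            # a result is stored back by address
    print(memory[data_address])          # 42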

Cache Memory

Cache memory, also called CPU memory, is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly into the CPU chip or placed on a separate chip that has its own bus interconnect to the CPU.

The basic purpose of cache memory is to store program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program.

Cache Memory Levels

Cache memory is fast and expensive. Traditionally, it is categorized as "levels" that describe its closeness and accessibility to the microprocessor:
  • Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in the processor chip (CPU).
  • Level 2 (L2) cache is often more capacious than L1; it may be located on the CPU or on a separate chip or coprocessor with a high-speed alternative system bus interconnecting the cache to the CPU, so as not to be slowed by traffic on the main system bus.
  • Level 3 (L3) cache is typically specialized memory that works to improve the performance of L1 and L2. It can be significantly slower than L1 or L2, but is usually double the speed of RAM. In the case of multicore processors, each core may have its own dedicated L1 and L2 cache, but all cores share a common L3 cache. When an instruction is referenced in the L3 cache, it is typically elevated to a higher-tier cache.
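
As a rough model of why frequently re-referenced instructions benefit from a small, fast cache, the Python sketch below counts hits and misses for an access pattern that revisits the same few addresses repeatedly. The cache capacity, the least-recently-used replacement rule and the access pattern are all assumptions made for illustration; real caches are organized into lines and sets across the levels described above.

    from collections import OrderedDict

    # Tiny single-level cache model: address -> contents, ordered by recency.
    CACHE_CAPACITY = 4
    cache = OrderedDict()
    hits = misses = 0

    def access(address):
        global hits, misses
        if address in cache:
            hits += 1
            cache.move_to_end(address)       # mark as most recently used
        else:
            misses += 1
            if len(cache) >= CACHE_CAPACITY:
                cache.popitem(last=False)    # evict the least recently used entry
            cache[address] = "contents of address %d" % address  # fetch from RAM

    # A loop re-references the same four instruction addresses many times,
    # so after the first pass almost every access is a fast cache hit.
    for _ in range(100):
        for address in (0, 1, 2, 3):
            access(address)

    print(hits, misses)                      # 396 hits, 4 misses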



Input/Output

In this course we refer to an input function as a collecting information process and an
output function as a displaying information process. The input function allows data to
enter the system, while the output function allows data to exit it.

Secondary Storage

In terms of processing, secondary storage is used to store and retrieve both data and
instructions. The ability to store and retrieve instructions, or programs, in a similar
manner to data is the basis of von Neumann’s stored program concept. This ability
allows computers to easily execute programs multiple times. It is also the reason that
computers are multi-purpose machines; that is, they can easily run different programs
that solve different problems.
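
The Python sketch below illustrates this in file terms: a “program” (here just the list of invented instruction codes from the earlier sketches) is written to secondary storage and read back exactly as an ordinary data file would be, ready to be loaded into main memory and executed again. The file name is made up for the example.

    import json

    # The same hypothetical program used in the earlier sketches.
    program = [10, 6, 20, 7, 99, 0, 40, 2]

    # Store the program on secondary storage just as data would be stored.
    with open("stored_program.json", "w") as f:
        json.dump(program, f)

    # Later, or on a completely different run, retrieve it again.
    with open("stored_program.json") as f:
        memory = json.load(f)

    print(memory == program)     # True: the program is ready to be run again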

