Unit:4 Basic Computer Architecture Notes (Second Semester Second Parts)


Introduction:  


Computer architecture refers to the design and organization of the various components of a computer system to ensure that they work together efficiently to execute instructions and perform tasks. It involves the arrangement and interconnection of hardware components, including the central processing unit (CPU), memory, input/output devices, storage devices, and the system bus.

Computer architecture encompasses several key aspects, including the instruction set architecture (ISA), which defines the set of instructions that a CPU can execute, the data representation and formats used for computation, and the memory hierarchy that determines how data is stored and accessed at different speeds and capacities.

The goal of computer architecture is to create a system that can execute a wide range of instructions and applications with optimal performance, reliability, and efficiency. It involves decisions about the organization of the CPU, the pathways for data and control signals, the type of memory used, and the communication between various components. Different computer architectures may employ diverse strategies, such as parallel processing, pipelining, and multiple levels of caching, to enhance performance.

History of computer architecture:

Computers have gone through many changes over time. The first generation of computers appeared around 1940, and since then, up to 2023, there have been five generations of computers. Computers evolved over a long period, starting from the 16th century, continuously improving in speed, accuracy, size, and price to become the modern-day computer.

The different phases of this long period are known as computer generations. The first generation of computers was developed from 1940-1956, followed by the second generation from 1956-1963, the third generation from 1964-1971, and the fourth generation from 1971 until the present, while the fifth generation is still being developed.

First Generation of Computers

The first generation of computers used vacuum-tube technology and was built between 1946 and 1959. Vacuum tubes were expensive and produced a lot of heat, which made these computers very expensive and only affordable to large organizations. Machine language was the programming language used for these computers, and they could not multitask.

The ENIAC was the first electronic general-purpose computer; it used about 18,000 vacuum tubes and was built starting in 1943 for war-related calculations. Examples of first-generation computers include EDVAC, IBM-650, IBM-701, Manchester Mark 1, Mark 2, etc.


Here are two of the main advantages of first-generation computers:

• The first generation was tough to hack and was quite strong.

• The first generation could perform calculations quickly, in just one-thousandth of a second.

Here are two of the main disadvantages of first-generation computers:

• They consumed high amounts of energy/electricity.

• They were not portable due to their weight and size.

Second Generation of Computers

The second generation of computers was developed in the late 1950s and 1960s. These computers replaced vacuum tubes with transistors, making them smaller, faster, and more efficient. Transistors were more reliable than vacuum tubes, required less maintenance, and generated less heat.

Second-generation computers were smaller and more portable, making them accessible to a wider audience. Magnetic core memory was also introduced in this generation, which was faster and more reliable. This laid the foundation for further developments, paving the way for the third generation that used integrated circuits.

Here are two of the main advantages of second-generation computers:

• They provided better speed and improved accuracy.

• Computers developed in this era were smaller, more reliable, and consumed less power.

Here are two of the main disadvantages of second-generation computers:

• They were only used for specific objectives and required frequent maintenance.

• They still relied on punch cards for input.

Third Generation of Computers

The third generation of computers emerged between 1964 and 1971. This generation used microchips or integrated circuits, making it possible to create smaller, cheaper, and much faster computers.

The third generation of computers was much faster than previous generations, with computational times reduced from microseconds to nanoseconds. New input devices like the mouse and keyboard were introduced, replacing older methods like punch cards. New functionalities like multiprogramming, time-sharing, and remote processing were introduced, allowing for more efficient use of computer resources.

Here are two of the main advantages of third-generation computers:

• The use of integrated circuits made them more reliable.

• They were smaller in size and required less space than previous generations.

Here are two of the main disadvantages of third-generation computers:

• Advanced technology was needed to manufacture IC chips.

• Formal training was necessary to operate third-gen computers.

Fourth Generation of Computers

Fourth-generation computers were developed from 1972 onward, after the third generation, and were built around microprocessors. They used Very Large Scale Integration (VLSI) circuits, which contained around 5,000 transistors on a single chip and were capable of performing complex activities and computations.

Fourth-generation computers were more adaptable, had more primary storage capacity, and were faster and more reliable than previous generations; they were also portable, small, and required less electricity. Intel was the first company to develop a microprocessor, which was used in fourth-generation computers.

Fourth-generation computers used VLSI chip technology and were incredibly powerful yet very small, leading to a revolution in the computer industry. This generation had the first supercomputers, used high-level programming languages like C, C++, dBASE, etc., and could perform many accurate calculations.

Here are two of the main advantages of fourth-generation computers:

• Fourth-generation computers were smaller and more dependable.

• GUI (Graphical User Interface) technology was used in this generation to provide users with better comfort.

Here are two of the main disadvantages of fourth-generation computers:

• They used complex VLSI chips, and VLSI chip manufacturing requires advanced technology.

• Building these computers required integrated circuits (ICs), whose development demanded cutting-edge fabrication technology.

Fifth Generation of Computers

The fifth generation of computers emerged after the fourth generation and is still being developed. Fifth-generation computers use artificial intelligence (AI) to perform various tasks, and they are programmed in high-level languages such as Python, R, C#, Java, etc.

Fifth-generation computers employ ULSI (Ultra Large Scale Integration) technology, parallel processing, and AI to perform scientific computations and develop AI software. They can perform intricate tasks such as image recognition, human speech interpretation, natural language understanding, etc. Examples of fifth-generation computers include laptops, desktops, notebooks, Chromebooks, etc.

Here are two of the main advantages of fifth-generation computers:

• These computers are lightweight and easy to move around.

• They are easier to repair, and parallel processing technology has improved in these computers.

Here are two of the main disadvantages of fifth-generation computers:

• They can be misused for spying on people.

• There is fear of unemployment due to AI replacing jobs.

Overview of Computer Organization

Computer organization refers to the way a computer's hardware components are arranged and interact to execute instructions. It encompasses the design and structure of the computer's internal architecture, including the central processing unit (CPU), memory, input/output devices, and the system bus. The primary goal of computer organization is to create an efficient, reliable, and scalable system that can execute a wide range of tasks.

At its core, computer organization is concerned with the organization and interconnection of the hardware components to ensure effective communication and coordination. The key components include:

1. Central Processing Unit (CPU): Often regarded as the brain of the computer, the CPU is responsible for executing instructions stored in memory. It comprises the arithmetic logic unit (ALU) for performing calculations, the control unit for managing the execution of instructions, and registers for temporary data storage.

2. Memory: Memory is where data and program instructions are stored for quick access by the CPU. Computer systems typically have two types of memory: volatile RAM (Random Access Memory) for temporary data storage and non-volatile storage (like hard drives or SSDs) for long-term data storage.

3. Input/Output (I/O) Devices: These include peripherals like keyboards, mice, displays, and external storage devices. I/O devices facilitate communication between the computer and the external world.

4. System Bus: The system bus is a communication pathway that connects the CPU, memory, and I/O devices, allowing them to exchange data and instructions.

5. Storage: In addition to RAM, computers have long-term storage devices like hard drives or SSDs, where data and applications are stored even when the power is turned off.

6. Cache Memory: Cache memory is a small, high-speed type of volatile computer memory that provides high-speed data access to the processor and stores frequently used programs, applications, and data.

Computer organization also considers factors such as data representation, instruction set architecture, and addressing modes. Different types of computer organizations, such as von Neumann architecture and Harvard architecture, offer varying approaches to organizing memory and processing units.

In summary, computer organization is a crucial aspect of computer architecture that focuses on the structure and design of a computer system's components, ensuring efficient communication and execution of instructions. The field continues to evolve with advancements in technology, leading to the development of faster, more reliable, and energy-efficient computing systems.

Memory Hierarchy and Cache

In computer system design, the memory hierarchy is an enhancement that organizes memory so as to minimize access time. The memory hierarchy was developed based on a program behavior known as locality of reference. The figure below clearly demonstrates the different levels of the memory hierarchy.

Why is a Memory Hierarchy Required in the System?

A memory hierarchy is one of the most essential features of computer memory, as it helps in optimizing the memory available in the computer. There are multiple levels of memory, each with a different size, cost, and speed. Some types of memory, like cache and main memory, are faster than the others but have smaller capacity and higher cost, whereas other types offer larger capacity but are slower. Access time also differs across types: some provide faster access, others slower.

Types of Memory Hierarchy


This memory hierarchy design is divided into 2 main types:

• External Memory or Secondary Memory:

Comprising magnetic disk, optical disk, and magnetic tape, i.e., peripheral storage devices that are accessible by the processor via an I/O module.

• Internal Memory or Primary Memory:

Comprising main memory, cache memory, and CPU registers. This is directly accessible by the processor.

1. Registers

Registers are small, high-speed memory units located in the CPU. They are used to store the most frequently used data and instructions. Registers have the fastest access time and the smallest storage capacity, typically ranging from 16 to 64 bits.

2. Cache Memory

Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used data and instructions that have been recently accessed from the main memory. Cache memory is designed to minimize the time it takes to access data by providing the CPU with quick access to frequently used data.
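The effect of locality of reference can be sketched with a toy direct-mapped cache in Python; the line count and mapping here are illustrative assumptions, not any particular CPU's cache:

```python
# A minimal direct-mapped cache sketch showing how locality of reference
# turns repeated accesses into fast cache hits.

class DirectMappedCache:
    def __init__(self, num_lines=8):
        self.num_lines = num_lines
        self.tags = [None] * num_lines  # one tag per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line = address % self.num_lines   # index bits pick the line
        tag = address // self.num_lines   # tag identifies which block is cached
        if self.tags[line] == tag:
            self.hits += 1                # data already cached: fast path
        else:
            self.misses += 1              # fetch from main memory, fill the line
            self.tags[line] = tag

cache = DirectMappedCache()
# A loop that re-reads the same small array exhibits temporal locality:
for _ in range(10):
    for addr in range(4):
        cache.access(addr)

print(cache.hits, cache.misses)  # 36 4 -- only the first pass misses
```

Only the first pass over the four addresses misses; the nine remaining passes hit every time, which is exactly why a small, fast cache pays off for programs with good locality.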

3. Main Memory

Main memory also known as RAM (Random Access Memory), is the primary memory of a computer system. It has a larger storage capacity than cache memory, but it is slower. Main memory is used to store data and instructions that are currently in use by the CPU.

Types of Main Memory

Static RAM: Static RAM stores binary information in flip-flops, and the information remains valid as long as power is supplied. It has a faster access time and is used in implementing cache memory.

Dynamic RAM: Dynamic RAM stores binary information as a charge on a capacitor. It requires refreshing circuitry to maintain the charge on the capacitors every few milliseconds. It contains more memory cells per unit area than SRAM.

4. Secondary Storage

Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is non-volatile memory with a larger storage capacity than main memory. It is used to store data and instructions that are not currently in use by the CPU. Secondary storage has the slowest access time and is typically the least expensive type of memory in the memory hierarchy.

5. Magnetic Disk

Magnetic disks are simply circular plates fabricated from metal or plastic coated with a magnetizable material. Magnetic disks work at high speed inside the computer and are frequently used.

6. Magnetic Tape

Magnetic tape is simply a magnetic recording medium covered with a plastic film. It is generally used for the backup of data. In the case of magnetic tape, access time is slower, and therefore some amount of time is required to reach a given position on the strip.

 

Instruction codes

Instruction codes are bits that instruct the computer to execute a specific operation. An instruction comprises groups of bits called fields. These fields include:

• The Operation code (Opcode) field, which determines the operation to be performed.
• The Address field, which contains the operand's location, i.e., a register or memory location.
• The Mode field, which specifies how the operand is located.


Mode | Opcode | Address of Operand
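As an illustrative sketch, the three fields above can be extracted with bit shifts and masks. The layout assumed here (1 mode bit, a 3-bit opcode, and a 12-bit address in a 16-bit word) follows the Mano-style basic computer described later in this unit; real ISAs vary in field widths.

```python
# Splitting a 16-bit instruction word into mode, opcode, and address fields
# (assumed layout: bit 15 = mode, bits 14-12 = opcode, bits 11-0 = address).

def decode(word):
    mode = (word >> 15) & 0x1      # 0 = direct, 1 = indirect addressing
    opcode = (word >> 12) & 0x7    # which operation to perform
    address = word & 0xFFF         # where the operand lives
    return mode, opcode, address

# Example: indirect mode, opcode 001, operand at address 0x234
word = 0b1001001000110100
print(decode(word))  # (1, 1, 564) -- address 564 == 0x234
```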

Structure of an Instruction Code

A collection of instruction codes is known as an instruction set: the set of binary codes representing the operations that a computer processor can perform. The structure of an instruction code can vary depending on the architecture of the processor, but it generally consists of the following parts:

• Opcode: The opcode (operation code) represents the operation that the processor must perform. It might indicate that the instruction is an arithmetic operation such as addition, subtraction, multiplication, or division.

• Operand(s): The operand(s) represent the data that the operation must be performed on. This data can take various forms, depending on the processor's architecture. It might be a register containing a value, a memory address pointing to a location in memory where the data is stored, or a constant value embedded within the instruction itself.

• Addressing mode: The addressing mode represents how the operand(s) are to be interpreted. It might indicate that the operand is a direct address in memory, an indirect address (i.e., a memory address stored in a register), or an immediate value (i.e., a constant embedded within the instruction).

• Prefixes or modifiers: Some instruction sets include additional prefixes or modifiers that change the behavior of an instruction. For example, they may specify that the operation should be performed only if a specific condition is met, or that the instruction should be executed repeatedly until a specific condition is met.

Types of Instruction Code

There are various types of instruction codes. They are classified based on the number of operands, the type of operation performed, and the addressing modes used. The following are some common types of instruction codes:

1. One-operand instructions:

These instructions have one operand and perform an operation on that operand. For example, the "neg" instruction in x86 assembly language negates the value of a single operand.

2. Two-operand instructions:

These instructions have two operands and perform an operation involving both. For example, the "add" instruction in x86 assembly language adds two operands together.

3. Three-operand instructions:

These instructions have three operands and perform an operation that involves all three. For example, the "fma" (fused multiply-add) instruction in some processors multiplies two operands together, adds a third operand, and stores the result in a destination register (which, depending on the ISA, may be one of the source operands or a separate fourth register).

4. Data transfer instructions:

These instructions move data between memory and registers or between registers. For example, the "mov" instruction in x86 assembly language moves data from one location to another.

5. Control transfer instructions:

These instructions change the flow of program execution by modifying the program counter. For example, the "jmp" instruction in x86 assembly language jumps to a different location in the program.

6. Arithmetic instructions:

These instructions perform mathematical operations on operands. For example, the "add" instruction in x86 assembly language adds two operands together.

7. Logical instructions:

These instructions perform logical operations on operands. For example, the "and" instruction in x86 assembly language performs a bitwise AND operation on two operands.

8. Comparison instructions:

These instructions compare two operands and set flags based on the result. For example, the "cmp" instruction in x86 assembly language compares two operands and sets flags indicating whether one is equal to, greater than, or less than the other.

9. Floating-point instructions:

These instructions perform arithmetic and other operations on floating-point numbers. For example, the "fadd" instruction in x86 assembly language adds two floating-point numbers together.
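Several of these instruction classes can be illustrated with a toy interpreter. The ISA here (mov/addi/add/cmp/jne over three registers) is invented for the example, not a real instruction set:

```python
# A toy register-machine interpreter showing data transfer (mov), arithmetic
# (add/addi), comparison (cmp), and control transfer (jne) instruction classes.

def run(program):
    regs = {"r0": 0, "r1": 0, "r2": 0}
    equal = False                        # flag set by cmp
    pc = 0                               # program counter
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "mov":                  # data transfer: load an immediate
            regs[args[0]] = args[1]
        elif op == "add":                # arithmetic: reg += reg
            regs[args[0]] += regs[args[1]]
        elif op == "addi":               # arithmetic: reg += immediate
            regs[args[0]] += args[1]
        elif op == "cmp":                # comparison: set the equal flag
            equal = regs[args[0]] == regs[args[1]]
        elif op == "jne":                # control transfer: jump if not equal
            if not equal:
                pc = args[0]
    return regs

# Sum 1..5 into r0: r1 counts up toward the limit held in r2.
program = [
    ("mov", "r0", 0),     # 0
    ("mov", "r1", 0),     # 1
    ("mov", "r2", 5),     # 2
    ("addi", "r1", 1),    # 3: r1 += 1
    ("add", "r0", "r1"),  # 4: r0 += r1
    ("cmp", "r1", "r2"),  # 5
    ("jne", 3),           # 6: loop back while r1 != r2
]
print(run(program)["r0"])  # 15
```

Note how the control-transfer instruction works purely by rewriting the program counter, exactly as described for "jmp" above.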

Stored Program Organization

Stored Program Organization (SPO) is a fundamental concept in computer architecture that revolutionized the way computers operate. In a system following the Stored Program Organization, both data and instructions are stored in the computer's memory in the same format. This means that a computer can manipulate its own program instructions just like any other data, enabling a high degree of flexibility and programmability.

The cornerstone of SPO is the von Neumann architecture, named after mathematician and computer scientist John von Neumann, who formalized the idea. In this architecture, the CPU fetches instructions and data from the same memory, allowing for seamless interaction between the two.

This design principle has become the standard for modern computers, fostering the development of versatile and programmable machines that can execute a wide range of applications by simply changing the stored program in memory. The Stored Program Organization has played a pivotal role in the evolution of computing systems, enabling the development of powerful and adaptable computers that have become integral to various aspects of modern life.


Common bus system


A basic computer has 8 registers, a memory unit, and a control unit. The diagram of the common bus system is as shown below.

 

Connections:


The outputs of all the registers except OUTR (the output register) are connected to the common bus. Which output is selected depends upon the binary value of the select variables S2, S1, and S0. The lines from the common bus are connected to the inputs of the registers and the memory. A register receives information from the bus when its LD (load) input is activated, while in the case of memory the Write input must be enabled for it to receive information. The contents of memory are placed onto the bus when its Read input is activated.
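The select logic can be sketched in Python. The S2 S1 S0 encoding below follows the common textbook assignment for the basic computer and should be treated as an assumption:

```python
# Sketch of the common-bus selection logic: the three select lines S2 S1 S0
# choose which register's output drives the bus (assumed textbook encoding).

BUS_SOURCES = {
    0b000: "none",    # nothing selected: bus reads as zero
    0b001: "AR",
    0b010: "PC",
    0b011: "DR",
    0b100: "AC",
    0b101: "IR",
    0b110: "TR",
    0b111: "Memory",  # a memory read places its word on the bus
}

def bus_value(select, registers):
    source = BUS_SOURCES[select]
    return registers.get(source, 0)  # unselected/"none" yields 0

regs = {"AR": 0x123, "PC": 0x010, "DR": 0xBEEF, "AC": 0x00FF}
print(hex(bus_value(0b011, regs)))  # S2 S1 S0 = 011 selects DR: 0xbeef
```

Only one source can drive the bus per clock cycle, which is exactly why the select lines form a 3-bit code rather than one enable line per register.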

 
Various Registers:


Four registers (DR, AC, IR, and TR) are 16 bits wide, and two registers (AR and PC) are 12 bits wide. INPR and OUTR have 8 bits each. INPR receives a character from the input device and delivers it to AC, while OUTR receives a character from AC and transfers it to the output device. Five registers have three control inputs: LD (load), INR (increment), and CLR (clear). Such registers are similar to a binary counter.

Abbreviation | Register Name
OUTR | Output register
TR | Temporary register
IR | Instruction register
INPR | Input register
AC | Accumulator
DR | Data register
PC | Program counter
AR | Address register


Adder and logic circuit:

 

The adder and logic circuit provides the 16 inputs to AC. This circuit has three sets of inputs. One set comes from the outputs of AC, which implements register micro-operations. Another set comes from DR (the data register); these are used to perform arithmetic and logic micro-operations. The result of these operations is sent to AC, while the end-around carry is stored in E, as shown in the diagram. The third set of inputs comes from INPR.

 

Note:

The content of any register can be placed on the common bus and an operation can be performed in the adder and logic circuit during the same clock cycle.


Instruction set

An instruction set is a group of commands for a central processing unit (CPU) in machine language. The term can refer to all possible instructions for a CPU or to a subset of instructions used to enhance its performance in certain situations.

All CPUs have instruction sets that enable commands directing the CPU to switch the relevant transistors. The instructions tell the CPU to perform tasks. Some instructions are simple read, write, and move commands that direct data to different hardware elements.

Instructions are made up of a specific number of bits. For instance, a CPU's instructions might be 8 bits, where the first 4 bits make up the operation code that tells the computer what to do, and the next 4 bits are the operand, which tells the computer the data that should be used. The length of an instruction can vary from as few as 4 bits to many hundreds. In some instruction set architectures (ISAs), different instructions have different lengths; other ISAs have fixed-length instructions.
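The 8-bit format described above (a 4-bit opcode followed by a 4-bit operand) can be sketched as:

```python
# Decoding the hypothetical 8-bit instruction described in the text:
# upper nibble = opcode (what to do), lower nibble = operand (data to use).

def decode8(instruction):
    opcode = (instruction >> 4) & 0xF   # first 4 bits
    operand = instruction & 0xF         # next 4 bits
    return opcode, operand

print(decode8(0b00111010))  # (3, 10) -- opcode 3, operand 10
```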

The following are three main ways instruction set commands are used:

1. Data handling and memory management: Instruction set commands are used when setting a register to a specific value, copying data from memory to a register or vice versa, and reading and writing data.

2. Arithmetic and logic operations: These commands include add, subtract, multiply, divide, and compare, which examines the values in two registers to see if one is greater or less than the other.

3. Control flow activities: One example is branch, which instructs the system to go to another location and execute commands there. Another is jump, which moves execution to a specific memory location or address.

Instruction types

A computer instruction is a binary code that specifies how the computer performs a series of micro-operations. Instructions, together with the data they operate on, are saved in memory. Every computer has its own set of instructions. Instructions are divided into two elements: operation codes (opcodes) and addresses.

A computer’s instructions can be of any length and have any number of addresses. The arrangement of a computer’s registers determines the different address fields in the instruction format. An instruction can be classified as a three-, two-, one-, or zero-address instruction, depending on the number of address fields.

Three Address Instructions

A three-address instruction has the following general format:

operation source 1, source 2, destination

ADD X, Y, Z

Here, X, Y, and Z are three variables, each assigned to a distinct memory location. The operation performed on the operands is ‘ADD.’ The source operands are ‘X’ and ‘Y,’ while the destination operand is ‘Z.’

Bits are required to specify the three operands. To specify one operand (one memory address), n bits are required. In the same way, 3n bits are required to specify three operands (three memory addresses). Additional bits are also required to identify the ADD operation itself.

Two Address Instructions

A two-address instruction has the following general format:

operation source, destination

ADD X, Y

Here, X and Y are two variables, each assigned to a specific memory address. The operation performed on the operands is ‘ADD.’ This instruction adds the contents of variables X and Y and stores the result in variable Y. ‘X’ is a source operand, while ‘Y’ is used as both a source and a destination operand.

Bits are required to specify the two operands. To specify one operand (one memory address), n bits are required; to specify two operands (two memory addresses), 2n bits are required. Additional bits are also needed for the ADD operation itself.
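The operand bit counts above can be made concrete with a small worked example; the 12-bit address width (n = 12) and 4-bit opcode are hypothetical values chosen for illustration:

```python
# Worked example of instruction lengths for three- and two-address formats,
# assuming hypothetical widths: 12-bit memory addresses and a 4-bit opcode.

n = 12           # bits per memory address (one operand)
opcode_bits = 4  # bits to identify the operation (e.g. ADD)

three_address_len = opcode_bits + 3 * n  # opcode + three addresses
two_address_len = opcode_bits + 2 * n    # opcode + two addresses

print(three_address_len, two_address_len)  # 40 28
```

Fewer address fields mean shorter instructions, which is the main trade-off among the formats in this section: compact encodings versus having to reuse operands as destinations.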

One Address Instructions

One address instruction has the following general format:

operation source

ADD X

Here, X refers to a variable assigned to a specific memory location. The operation performed on operand X is ‘ADD.’ This instruction adds the value of variable X to the contents of the accumulator and saves the result back in the accumulator, replacing the accumulator’s previous contents.

Zero Address Instructions

In zero address instructions, the positions of the operands are implicitly represented. These instructions use a structure called a pushdown stack to hold operands. 
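Zero-address evaluation can be sketched with a toy stack machine; the PUSH/POP/ADD/MUL mnemonics and the memory model here are invented for the example:

```python
# Zero-address instructions name no operands: they implicitly use the top of
# a pushdown stack. This sketch evaluates Z = (X + Y) * W that way; only PUSH
# and POP carry an address, while ADD and MUL are zero-address.

def run_stack(program, memory):
    stack = []
    for op, *args in program:
        if op == "PUSH":                     # push memory[addr] onto the stack
            stack.append(memory[args[0]])
        elif op == "POP":                    # pop top of stack into memory[addr]
            memory[args[0]] = stack.pop()
        elif op == "ADD":                    # zero-address: pop two, push sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":                    # zero-address: pop two, push product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return memory

memory = {"X": 2, "Y": 3, "W": 4}
program = [
    ("PUSH", "X"), ("PUSH", "Y"), ("ADD",),  # stack now holds X + Y
    ("PUSH", "W"), ("MUL",),                 # stack now holds (X + Y) * W
    ("POP", "Z"),                            # store the result in Z
]
print(run_stack(program, memory)["Z"])  # 20
```

This is the same scheme hardware stack machines and many virtual machines (for example, bytecode interpreters) use: because the operand positions are implicit, the arithmetic instructions need no address fields at all.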

 
