Computer Architecture - Sections 1.1-1.5
These 5 pages of class notes were uploaded by Aaron Maynard on Tuesday, August 30, 2016. The notes belong to CS 3340 at the University of Texas at Dallas, taught in Fall 2016. Since upload, they have received 70 views. For similar materials, see Computer Architecture in Computer Science and Engineering at the University of Texas at Dallas.
COMPUTER ARCHITECTURE
FALL SEMESTER 2016
INSTRUCTOR: DR. KAREN MAZIDI
email@example.com

22 August 2016 - Chapter 1

These notes cover the subjects of CS 3340.003 (and other sections). This packet covers topics within Chapter 1 of Computer Organization and Design, Fifth Edition: The Hardware/Software Interface by Patterson and Hennessy. The material on these pages includes, but is not limited to, the presentation slides provided by the professor.

Computer Abstractions and Technology

What do people mean when they talk about abstraction? According to the interwebs (i.e., Wikipedia), abstraction is a technique for managing the complexity of computer systems. Dr. Mazidi gives three definitions in her class:
● Seeing the big picture
● Pushing details down to a lower level, concealing them from the levels above
● Abstraction helps us manage complex systems
Take with that what you will.

The Big Picture of Changing Technology

The dramatic changes in what is being called "The Computer Revolution" can be considered in alignment with Moore's Law. In 1965, Intel co-founder Gordon Moore observed that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented, and he predicted the trend would continue. This prediction (later revised to a doubling roughly every two years) held true for over 50 years.

Computer architecture is another way of saying the organization of a computer: how a system is designed to achieve the desired functionality, ideally in the most efficient way possible. Different classes of computers have different architectures because they have different functionalities or purposes to fulfill. There are four major classes of computers: personal, server, embedded, and supercomputers.
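Moore's observation can be sketched with a little arithmetic. The sketch below is illustrative only: the starting point (the Intel 4004's roughly 2,300 transistors in 1971) is a commonly cited figure, and the two-year doubling period is the revised form of the prediction; neither number comes from these notes.

```python
# Illustrative sketch of Moore's Law: transistor count doubling at a
# fixed interval. Base count (~2,300 for the Intel 4004, 1971) and the
# 2-year doubling period are assumptions for illustration.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under an idealized doubling schedule."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Each doubling period multiplies the count by exactly 2.
print(transistors(1971))  # 2300.0
print(transistors(1973))  # 4600.0
print(transistors(1991))  # 2300 * 2**10 = 2355200.0
```

Twenty years is ten doubling periods, so the count grows by a factor of 2^10 = 1024 — exponential growth is why the "Computer Revolution" moved so fast.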
● Personal computers
○ General purpose
○ Run a variety of software
○ Subject to the cost/performance tradeoff
● Server computers
○ Network based
○ High capacity, performance, and reliability
○ Fast I/O
○ Range in size from credit-card-sized boards to entire buildings
● Supercomputers
○ Run high-end scientific and engineering calculations
○ Highest capability, yet represent only a small fraction of the market
● Embedded computers
○ Make up 95% of microprocessor sales
○ Hidden as components of larger systems
○ Stringent power/performance/cost constraints

The PostPC Era

In the newest era of technology, we have the Personal Mobile Device, or PMD. These devices are battery operated, connect to the internet, and can perform a multitude of tasks. They include but are not limited to smartphones, tablets, and electronic glasses. What makes these devices so powerful is the newfound capability of cloud computing. Cloud computing is made available through warehouse-scale computers (WSCs) and can be provided as Software as a Service (SaaS): a portion of the software runs on the personal mobile device, while the rest runs on cloud-based servers. This allows the devices to use their memory more efficiently. The top companies working on cloud computing include Amazon and Google.

Program Performance

There are four things that a program's performance depends on:
● Algorithm
○ Determines the number of operations executed
● Programming language, compiler, and architecture
○ Determine the number of machine instructions executed per operation
● Processor and memory system
○ Determine how fast instructions are executed
● I/O system (including the OS)
○ Determines how fast I/O operations are executed

Performance can be improved through what are called the eight great ideas:
1. Design for Moore's Law
2. Use abstraction to simplify design
3. Make the common case fast
4. Performance via parallelism
5. Performance via pipelining
6. Performance via prediction
7. Hierarchy of memories
8.
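The four factors above combine in the classic performance equation from Patterson and Hennessy: CPU time = instruction count × CPI / clock rate. A minimal sketch follows; the program size, CPI, and clock rate in it are hypothetical numbers chosen for illustration, not measurements from the notes.

```python
# Classic CPU performance equation:
#   CPU time = instruction count * CPI / clock rate
# The algorithm and compiler determine the instruction count, the
# architecture determines CPI (cycles per instruction), and the
# processor determines the clock rate. All numbers are hypothetical.

def cpu_time(instruction_count, cpi, clock_rate_hz):
    """Seconds needed to run a program on an idealized processor."""
    return instruction_count * cpi / clock_rate_hz

# A hypothetical program: 10 billion instructions, average CPI of 2.0,
# on a 2 GHz processor.
print(cpu_time(10e9, 2.0, 2e9))  # 10.0 (seconds)

# Halving CPI (e.g., via pipelining, idea #5) halves the run time.
print(cpu_time(10e9, 1.0, 2e9))  # 5.0 (seconds)
```

The equation makes the great ideas concrete: parallelism and pipelining attack CPI, a better algorithm attacks the instruction count, and Moore's Law historically attacked the clock rate.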
Dependability via redundancy

Your Program, and What It Means

Three parts are required to develop a program:
1. Application software
a. High-level language
i. Level of abstraction closer to the problem domain
ii. Provides for productivity and portability
b. Assembly language
i. Textual representation of instructions
c. Hardware representation
i. Binary digits (bits)
ii. Encoded instructions and data
2. System software
a. Compiler: translates HLL code to machine code
b. Operating system: service code
i. Handling input/output
ii. Managing memory and storage
iii. Scheduling tasks and sharing resources
3. Hardware
a. Processor, memory, I/O controllers

High-level languages have several advantages. They are easier to translate from pseudocode to code, and they improve programmer productivity. They also allow languages to be tailored to specific uses and increase portability. However, some applications need to be written in assembly language for efficiency. Programming in assembly can make you a better programmer because it lets you understand what is going on "under the hood," which helps you write more efficient code.

Processors (CPU)

The processor, or central processing unit (CPU), is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic operations those instructions specify. The datapath performs operations on the data, sequenced by the control. The control also sequences memory such as cache memory, a small, fast SRAM used for immediate access to data.

There are two types of memory to think about: volatile main memory and nonvolatile secondary memory. The difference between the two is simple: volatile main memory loses its instructions and data when power is turned off or removed, while nonvolatile secondary memory keeps its data on a magnetic disk, flash memory, or an optical disk.
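The cache mentioned above can be sketched as a small, fast lookup table sitting in front of slow main memory. The toy below models only the hit/miss behavior of a direct-mapped cache; the sizes, the list-as-memory model, and the class name are simplifying assumptions for illustration, not how real SRAM hardware is built.

```python
# Toy direct-mapped cache in front of a slow "main memory" (a list).
# A real cache is hardware (SRAM) indexed by address bits; here we
# model only hit/miss behavior. All sizes are arbitrary assumptions.

class DirectMappedCache:
    def __init__(self, memory, num_lines=4):
        self.memory = memory             # backing "main memory"
        self.num_lines = num_lines
        self.tags = [None] * num_lines   # which address each line holds
        self.data = [None] * num_lines
        self.hits = 0
        self.misses = 0

    def read(self, addr):
        line = addr % self.num_lines     # index bits: address mod #lines
        if self.tags[line] == addr:      # tag match -> fast hit
            self.hits += 1
        else:                            # miss -> fetch from main memory
            self.misses += 1
            self.tags[line] = addr
            self.data[line] = self.memory[addr]
        return self.data[line]

memory = list(range(100, 116))           # 16 words of "main memory"
cache = DirectMappedCache(memory)

# Repeated access to the same address: first a miss, then fast hits.
for _ in range(3):
    cache.read(5)
print(cache.hits, cache.misses)          # 2 1
```

This is the point of the memory hierarchy: after the first slow miss, repeated accesses to the same data are served from the small fast memory instead of main memory.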
A magnetic disk would be a hard drive; flash memory, a USB drive; and an optical disk, a CD-ROM or DVD.

Networks

A network is defined as a group of two or more computer systems linked together. There are many types of computer networks, including local-area networks (LANs), in which the computers are geographically close together (for example, in the same building). Networks let machines share communication and resources and give users nonlocal access. Wide-area networks (WANs) span larger geographic areas; the internet as we see it today is built on WANs, which our devices usually reach through wireless links such as WiFi or Bluetooth.