Milutinovic V. — Computer Architecture: Concepts and Systems








Title: Computer Architecture: Concepts and Systems

Author: Milutinovic V.

Abstract:

Computational tasks have become more demanding, requiring greater processing speed and larger data-handling capacity than ever before. This text approaches these tasks from an R&D perspective and presents solutions that focus on improvements in the system domain. Written by 20 recognized experts from the nation's leading universities and research institutions in the field of computer architecture, the book combines the latest information on uniprocessor and multiprocessor design, organized by mode of use: numeric, functional, or symbolic. The authors' knowledge and experience are brought together in a unique view of 12 carefully selected topics, including VLSI processor architecture; RISC types; design trade-offs for processors based on both silicon and GaAs technologies; addressing solutions for increasing machine performance; concepts and systems for numeric and symbolic processing; new advances in supercomputers; and architectural support for artificial intelligence. Concepts and techniques are presented clearly, using simple but effective examples and representative case studies.


Language: en

Category: Computer science/

Subject index status: index with page numbers is ready


Publication year: 1988

Number of pages: 575

Added to catalog: 22.02.2015

Subject index
Reduced instruction set computers (RISC), processors available      9
Reduced instruction set computers (RISC), silicon area freed up in      8
Reduction language      405
Reduction mechanism, definition      343
Redundancy      510 526
Redundancy, hybrid      518
Redundancy, masking      514—516
Redundancy, N-modular      515
Redundancy, reconfiguration      516—519
Redundancy, triple modular      514—516
Reference parameters, passing      65
Reference trees, storage management in FP machines      428
Referential transparency, definition      407
Register cell design, gallium arsenide      99—101
Register file      77
Register file design, gallium arsenide implementation      98—103
Register file optimizations, gallium arsenide processors      121—123
Register file partitioning, gallium arsenide      101
Register windows      63—67
Register windows, advantages      64
Register windows, complications      63—64
Register windows, role of compilers      65
Register windows, software issues      64
Reliability, classification      502
Reliability, definition      502
Remps vector multiprocessor      327—330
Remps vector multiprocessor, configuration      327 329
Remps vector multiprocessor, dynamic pipeline net in      329
Remps vector multiprocessor, purpose of      329—330
Remps vector multiprocessor, simulating Cray X-MP/4      330 334f
Remps vector multiprocessor, simulating HEP      330 334f
Remps vector multiprocessor, system architecture      328f
Reorganizer, definition      70
Replacement algorithms      112
Request generator, definition      266
Resource allocation, local computer systems      265—304
Resource scheduling      270—273
Resource scheduling schemes, examples      275
Resource scheduling, design methodology      276—277
Resource scheduling, Ethernet      276—277
Resource scheduling, single contention-bus network      276
Resource sharing, advantages of      267—268
Resource sharing, dataflow supercomputers      270
Resource sharing, dynamic task migration in      268—270
Resource sharing, generic model of      266—270
Resource sharing, load balancing in      268
Resource sharing, VLSI-systolic array multiprocessors in      268 269f
Resource, definition      265—266
Resource-allocation schemes      273—281
Resource-allocation schemes, design, single-contention-bus network      299t
Resource-allocation schemes, state-dependent      274
Resource-allocation schemes, state-independent      274
Resource-allocation schemes, taxonomy of      274—275
RETE algorithm      340—341
Retry techniques, error recovery      520
Ridge 32      18—20
Ridge, compare and branch instruction in      19
Ridge, cycle time      20
Ridge, instruction format      19f
Ridge, instruction set in      19
Ridge, no condition codes in      19
Ridge, Schottky bipolar logic implementation      20
Ring network, FP multiprocessors      417
Rollbacks      502 512
Routing networks, FP machines      419—420 420f
RP3 multiprocessor, parallel-serial control      142—143
Rumbaugh's dataflow machine, shared bus interconnection in      417
Scheduling in HEP machine      156
Scheduling in NYU Ultracomputer      158
Scheduling schemes, resource-allocation      274—275
Scheduling, levels of      155
Scheduling, multiprocessor      155—158
Scheduling, nature of      156
Scheduling-and-synchronization overhead, in multiprocessor architecture      149—151
Schema methods, definition      415
Schwartz and Barnwell method of transformation      480—481
SDFL      87
SDFL logic circuits, D-MESFETs in      87—88
Semantic gap, definition      52
Semaphores, MIMD processing      188
Semisystolic array, definition      457
Sequencing hazard interlocks, delayed branching      117
Sequencing hazard interlocks, silicon CISC, RISC      116—117
Sequoia System      541—542
Sequoia System, architecture      541f
Sequoia System, fault recovery in      541
Serial model of computation      139
Serial multiplier, gallium arsenide implementation      105
Shared bus interconnection, FP multiprocessors      417
Shared memory tree, definition      213
Shared-variable synchronization      161
Shifter      77
Shorts, definition      509—510
Shuffle-exchange network      236—238
Shuffle-exchange network, definition      236
Shuffle-exchange network, MIMD mode      238
Shuffle-exchange network, SIMD mode      237
SIFT      545—546
SIFT, example      546f
Silicon caches, placement policies in      111—112
Silicon, CISC vs. RISC implementation of multiplier/divider design      105
Silicon, compared with gallium arsenide      84—85 91—92
Silicon, hierarchical memory systems      111—112
Silicon, instruction pipeline      97f
SIMD algorithms      191—193
SIMD machine architecture      180—182
SIMD processing, associative processor      193—196
SIMD processing, languages for      188—190
SIMD, parallel mode      226
SIMD/MIMD algorithms      196—198
Simple operation model of computation      138
Single contention-bus network, resource scheduling in      276—277
Single-assignment languages, definition      407
Single-chip processor design, gallium arsenide      96
SISD machine architecture      180
Smooth module, computation with arrays      367—371
Smooth module, dataflow machine code for      370—371f
Smooth module, description tree for      379f
SNAP, architecture, applications      331t
Software recovery      529
Space Shuttle computer system, sanity check in      545
Space Shuttle computer system, software voting in      545
Stack architectures      25
Stanford MIPS      15—18
Stanford MIPS microprocessor      67—72
Stanford MIPS microprocessor, assembly instructions      68—69
Stanford MIPS microprocessor, compiler technology      67
Stanford MIPS microprocessor, instruction formats      72f
Stanford MIPS microprocessor, instruction set architecture      67—72
Stanford MIPS microprocessor, pipeline      70 71f
Stanford MIPS microprocessor, reorganizer      70
Stanford MIPS microprocessor, software instruction scheduling      71—72
Stanford MIPS microprocessor, word addressing      67
Stanford MIPS, architectural solutions in      18
Stanford MIPS, compact instruction formats in      107—108
Stanford MIPS, compare and branch instruction in      18
Stanford MIPS, cycle time of      18
Stanford MIPS, execution in      16
Stanford MIPS, pipelining in      16—18
Stanford MIPS, reorganizer in      16—17
Stanford MIPS, timing hazard interlocks in      119—120
STAR      531—532
STAR, architecture      531
STAR, configuration      532
STAR, operation      532
STARAN associative processor      208—210
STARAN associative processor, components of      210
STARAN associative processor, memory      208—209
STARAN associative processor, processing speed of      210
STARAN associative processor, structure      208 209f
STARAN, SIMD architecture      180
Static dataflow architecture      360—366
Static dataflow model      357
Static redundancy      see "Masking redundancy"
Stratus FT 200, logical processors in      536—537
STL      87
Storage management, FP machines      427—428
Storage-reclamation scheme, G-machine      437
Stratus FT 200      536—537
Stratus Link      536
Supercomputers      307—353
Supercomputers, applications      313—315
Supercomputers, architectural classes of      309
Supercomputers, Cray      317—320
Supercomputers, efficiency of      309—310
Supercomputers, future perspectives      344—347
Supercomputers, multiprocessing      315—330
Supercomputers, predictions for      356—357
Supercomputers, speed performance of      310t
Supercomputers, value of      308
Supercomputing, numerical      309—311
Symbol Programming Language, Fairchild/ISU Symbol      39
Symbolic processing vs. numeric processing      309—313
Symbolic processing, computers for      311—313
Symbolic processing, requirements for      310—311
Symbolics 3600      345
Synapse N + 1      537—538
Synapse N + 1, checkpointing in      537—538
Synapse N + 1, fault detection in      538
Synchronization, control graphs      159
Synchronization, control-level      159—160
Synchronization, data-level      160—166
Synchronization, multiprocessor      159—166
System reliability      see also "Fault-tolerant computers"
System reliability, fault-avoidance      501—502
System reliability, fault-tolerance      501—502
System reliability, methods of increasing      497
System reliability, reconfiguration increases      516—519
Systolic array, definition      454
Systolic arrays, algorithmically specified, design approaches      454—494
Systolic arrays, applications      455t
Systolic arrays, classification      457
Systolic arrays, transformational design approach      458—486
Systolic processing, definition      457
Systolic processing, design problems      457
Systolic processor      456f
Tagged token architecture      360—361
Tagged token machine, comparison with other multiprocessors      172t
Tandem      532—534
Tandem Systems, fault-tolerance in      534—535
Tandem Systems, system building blocks in      534
Tandem, architecture      533f
Tandem, data integrity in      532
Test&set      161
Texas reconfigurable array computer      211—213
Texas reconfigurable array computer, configuration      211
Texas reconfigurable array computer, data trees in      213
Texas reconfigurable array computer, instruction trees in      213
Texas reconfigurable array computer, memory capacity      213
Texas reconfigurable array computer, network operating modes      212—213
Texas reconfigurable array computer, network, SW-banyan network      212
Texas reconfigurable array computer, secondary memory in      213
THRISTLE, CMU, architecture, applications      332t
TI Explorer      345
TI-dataflow machine, ring network in      417
Time multiplexing      169—170
Time-series analysis, ARMA      295
Timing hazard interlocks, examples      119
Tolerant Systems      534—535
Transformation, functional program      415
Transformational systems, methodologies      460—461
Transformational systems, Y-chart for      459f
Translation look-aside buffer, NS32032      31
Transputer, compact instruction formats in      108
Tree structure, FP machines      417—418
Tricon-1, error detection in      548
Tricon-1, fault detection in      547
Tricon-1, high availability goal      547
Triple modular redundancy      514—515
UC Berkeley RISC      13—15 57—67
UC Berkeley RISC, compiler      15
UC Berkeley RISC, cycle time      15
UC Berkeley RISC, hardware features of      14—15
UC Berkeley RISC, history of      13
UC Berkeley RISC, implementation of RISC architecture      59 62
UC Berkeley RISC, instruction formats      62f
UC Berkeley RISC, large register file in      15
UC Berkeley RISC, pipeline      62—63
UC Berkeley RISC, register windows      57—59
UC Berkeley RISC, register windows in      14—15
UC Berkeley RISC-II, register cell design      101f
Uniprocessor approach, mapping process      429—430
Uniprocessor, array structure      524f
Uniprocessor, vector structure      523f
Uniprocessors      530—531
Uniprocessors, fault masking in      544
Uniprocessors, fault tolerance in      530—531
Uniprocessors, masking redundancy in      543—545
University of Reading RIMMS      22—24
University of Reading RIMMS, architecture      22—23
University of Reading RIMMS, microcomputer elements in      23—24
University of Reading RIMMS, multi-microprocessor      22
University of Reading RIMMS, registers      22
Up-level referencing      65
Val      see also "Functional programming languages"
Val compiler      374—377
Val compiler, analysis of program structure      377—381
Val compiler, organization      375
Val compiler, requirements for      374—375
VAX, SISD architecture      180
VAX-11, caching scheme      168
VAX-11, serial, single-level control      142
Vector machines      136 see "Cyber-205" "Fujitsu
Vector machines vs. multiprocessors      137
Vector machines, program slicing      138f
Virtual memory      110
VLSI technology      345
VLSI-systolic array multiprocessors, resource sharing systems      268 269f
von Neumann model of computation      138—139
Watchdog timers, fault/error detection      514
Window control, approximate greedy      293—294
Window control, binary-divide      285—288
Window control, dynamic-programming      288—292
Window control, dynamic-programming, lookup-table method for      296—298 297f
Window control, Ethernet interfaces      296—298
Window control, load estimation for      294—295
Window control, optimal-greedy      292—293
Window-control algorithms      285—298
Window-search scheme      283—284
Write-through technique      112
ZAPP      443—444
ZAPP system, determining parallelism in      431
ZAPP system, mapping approach in      430
ZAPP, controlling parallelism in      444
ZAPP, operation mode      443
ZAPP, program execution      444
Zilog Z80000      31—33
Zilog Z80000, address space      32
Zilog Z80000, flexible on-chip cache      32
Zilog Z80000, performance      33
Zilog Z80000, pipelining      32—33
Zilog Z80000, register file      32
ZMOB, architecture, applications      332t
ZMOB, ring network in      417
© Electronic library of the board of trustees of Mekhmat MSU, 2004-2024