Milutinovic V. — Computer Architecture: Concepts and Systems





Title: Computer Architecture: Concepts and Systems

Author: Milutinovic V.

Annotation:

Computation tasks have become more demanding of computers, requiring greater processing speed and larger data-handling capacity than ever before. This text approaches these tasks from an R&D perspective and presents solutions that focus on improvements in the system domain. With contributions from 20 recognized experts from the nation's leading universities and research institutions in the field of computer architecture, this book combines the latest information on uniprocessor and multiprocessor design based on their mode of use: numeric, functional, or symbolic. The authors' knowledge and experience are brought together to create a unique view of 12 carefully selected topics, including: VLSI processor architecture; RISC types; design trade-offs for processors based on both silicon and GaAs technologies; addressing solutions for increasing machine performance; concepts and systems of numeric and symbolic processing; new advances in supercomputers; and architectural support for artificial intelligence. The book clearly presents concepts and techniques using simple but effective examples and representative case studies.


Language: en

Category: Computer science/

Subject index status: index with page numbers is complete

Year of publication: 1988

Number of pages: 575

Added to the catalog: 22.02.2015

Subject index
DEC VLSI VAX, parallelism      34
DEC VLSI VAX, performance      34
Delayed branch      59f
Delayed branch, definition      11 57
Delayed branch, examples of      57
Delayed branch, sequencing hazard interlocks      117
DELtran      40—42
DELtran, characteristics      40
DELtran, FORTRAN-II      40—41
DELtran, referencing identifiers      40
DELtran, run-time interpreter      42
Demand-driven execution, model of parallel sequencing      141
Dennis dataflow machine, configuration      440—441
Dennis dataflow machine, program execution in      441—442
Dependability, computer systems      497—499
Description tree      377
Description tree, smooth module      379f 380f
Determinancy, definition      358
Direct execution      5
Direct execution architectures      42—45
Direct execution architectures, differences from von Neumann architectures      43
Direct execution architectures, disadvantages      42—43
Direct execution architectures, lexical processor      43—44
Direct execution architectures, PASDEC project      44
Direct execution architectures, University of Maryland approach      43—44
Direct execution computer      4
Discovery methods, definition      415
Distributed memory multiprocessors, organization      360
Distributed minimum-search algorithm      281—285
Distributed scheduling      275
Duplication, fault/error detection      512—513
Dynamic memory allocation, G-machine      437
Dynamic task migration, resource sharing systems      268—270
Dynamic-programming window control      288—292
Dynamic-programming window control, lookup-table method for      296—298 297f
E-JFETs      86
E-MESFETs      86
E-MESFETs in DCFL logic circuits      88
ECL      87
ECN, resource scheduling in      275
Edge-guided thresholding (EGT)      197
EGT algorithm      197
Electronic Switching Systems      535—536
Electronic Switching Systems, fault-tolerance in      535—536
Electronic Switching Systems, processors components in      535
Electronic Switching Systems, staged recovery in      535
error recovery      520—521
Error recovery, checkpointing      520
Error recovery, journaling      520
Error recovery, retry techniques      520
Error recovery, techniques      520—521
Error, definition      499
Error-correcting codes      516
Error-detection codes, fault/error detection      513
Errors, fault-tolerant systems      508—510
Errors, logical      509—510
Errors, methods of occurrence      509
ETA/GF10, speed performance of      310t
ETA/GF30, speed performance of      310t
Ethernet, CSMA/CD network      276—277
Ethernet, window-control protocol      296—270
Execution tree, functional program      411f
Execution unit design, gallium arsenide      103—105
Execution, data-driven      141
Execution, demand-driven      141
Expert system, definition      332
Expert system, examples      332
Explicit programming, for parallel processing      190—198
Extra-stage cube network      244—252
Extra-stage cube network, network control      249—251
Extra-stage cube network, network structure      244—249
Extra-stage cube network, partitioning in      251—252
Instruction pipeline design, gallium arsenide      97—98
Failure, definition      500
FAIM-1, architecture, applications      331t
Fairchild/ISU Symbol      38—40
Fairchild/ISU Symbol, features of      39—40
Fairchild/ISU Symbol, limitations      40
Fairchild/ISU Symbol, memory      39
Fairchild/ISU Symbol, Symbol Programming Language      39
Fault detection, error-correcting codes      516
Fault, definition      499
Fault-avoidance      526
Fault-avoidance, definition      497 501
Fault-avoidance, design features for      501
Fault-avoidance, methods of      501
Fault-tolerance      495—496 526
Fault-tolerance, definition      497
Fault-tolerance, processing effective errors      502
Fault-tolerance, techniques of      510—521
Fault-tolerant computer architectures      495—552
Fault-tolerant systems, critical computation      522
Fault-tolerant systems, error recovery in      520
Fault-tolerant systems, evaluation of      505—508
Fault-tolerant systems, examples      527—548
Fault-tolerant systems, high-availability      521
Fault-tolerant systems, high-performance computing      522
Fault-tolerant systems, long life      521
Fault-tolerant systems, masking redundancy in      514—516
Fault-tolerant systems, origins of faults      508—510
Fault-tolerant systems, postponed maintenance      521—522
Fault-tolerant systems, reconfiguration in      516—519
Fault-tolerant systems, structures      522—527
Fault-tolerant systems, types      521—522
Fault/error detection, techniques      512—514
Fetch&add      162
Fetch-and-add instruction, NYU ultracomputer      201
Fetch-and-phi operation, NYU Ultracomputer      201—202
FETs, logic families using      87
FIFO method      112
Fifth generation computer, definition      343 see
Flip network      208
FORK and JOIN instructions, MIMD processing      186—187
FORK/JOIN concept, RIMS microcomputer      23—24
FORTRAN for parallel processing      185
FORTRAN, use in dataflow computation      355
FTMP      546—547
FTMP, structure      546f
Fujitsu XP-200, speed performance of      310t
Functional language      405
Functional language, characterization      406—408
Functional language, examples      407
Functional programming languages, advantages of      409—410
Functional programming languages, architectures that support      405—453
Functional programming languages, caching in      433—434
Functional programming languages, compile-time techniques to improve efficiency      414—415
Functional programming languages, concurrent execution      412—413
Functional programming languages, data-driven evaluation      423—424
Functional programming languages, debugging      411—412
Functional programming languages, demand-driven evaluation      422
Functional programming languages, efficiency of      412—416
Functional programming languages, execution tree of      411f
Functional programming languages, optimization of      432—434
Functional programming languages, problems with      410—411
Functional programming languages, representation of programming structures      424—428
Functional programming languages, sequential evaluation      424
Functional programming machines, copying structures in      427
Functional programming machines, data representation in      426—427
Functional programming machines, deadlock avoidance in      431—432
Functional programming machines, defining granularity of parallelism in      429
Functional programming machines, design issues      416—434
Functional programming machines, ideal      445—447
Functional programming machines, mapping processes onto hardware      429—430
Functional programming machines, method of driving computation in      421—422
Functional programming machines, multiprocessor interconnection schemes      416—424
Functional programming machines, parallelism issues in      428—432
Functional programming machines, program representation in      424—426
Functional programming machines, storage management in      427
Functional programming, referential transparency in      407
G-machine      434—437
G-machine, combinator-reduction architecture      435—436
G-machine, design of      436—437
G-machine, dynamic memory allocation in      437
G-machine, storage management in      428
G-machine, storage-reclamation scheme in      437
Gallium arsenide circuits      see also "D-MESFETs" "E-MESFETs" "E-JFETs" "HBTs" "HEMTs" "MODFETs"
Gallium arsenide computer systems, design considerations      92—125
Gallium arsenide logic circuit families, performance levels of      89 90
Gallium arsenide technology, computer design for      84—131
Gallium arsenide, adder design      103—104
Gallium arsenide, characteristics affect computer design strategy      93—95
Gallium arsenide, compared with silicon      84—85
Gallium arsenide, comparison with silicon      91—92
Gallium arsenide, compiler design issues      115—125
Gallium arsenide, device families      86—87
Gallium arsenide, execution unit design      103—105
Gallium arsenide, hardware design issues      95—115
Gallium arsenide, information transfer rate      94—95
Gallium arsenide, instruction format design      106—115
Gallium arsenide, instruction pipeline design      97—98
Gallium arsenide, logic families      87—90
Gallium arsenide, memory pipelining      114—115
Gallium arsenide, memory system design      109—115
Gallium arsenide, multiplier/divider design      104—105
Gallium arsenide, number of chips required      93—94
Gallium arsenide, register file design      98—103
Gallium arsenide, register file partitioning      101—103
Gallium arsenide, RISC design philosophy      106
Gallium arsenide, single-chip processor design      96—97
Gallium arsenide, transistor speed      93
Gannon transformation method      464—465
Generalized cube network      239 239f
Generalized cube network, network control      241—242
Generalized cube network, network structure      238—241
Generalized cube network, partitioning      242—244
Generalized cube network, SIMD mode      240
Global list approach, mapping process      429—430
Glypnir, machine-dependent language for Iliac IV      189
Guzman Machine, architecture, applications      331t
Hamming single-error-correcting codes      516
Hardware recovery      528
Hardware requirements, computer architecture      51
Hazard functions, definition      506
HBTs      86
HBTs in digital logic circuits      89
HEMTs      see "MODFETs"
HEP multiprocessor      320—321
HEP multiprocessor vs. Arvind's data flow machine      173
HEP multiprocessor, comparison with other multiprocessors      172t
HEP multiprocessor, control mechanisms in      171
HEP multiprocessor, functional organization      146f
HEP multiprocessor, MIMD pipelining in      321
HEP multiprocessor, parallel-parallel-serial control      146
HEP multiprocessor, self-scheduling in      156—159
HEP multiprocessor, synchronization in      161—162
HEP multiprocessor, time multiplexing      169
HEP-1, architecture      321 322f
Hewlett — Packard Focus      28—30
Hewlett — Packard Focus, features of      28—29
Hewlett — Packard Focus, memory management      29
Hewlett — Packard Focus, operating speed      30
Hewlett — Packard Focus, pipelined      29
Hierarchical memory systems, compile-time control of      113
Hierarchical memory systems, silicon      111—112
Hierarchical system, FP machines      418—419
High-level language computer, classification of      4—6
High-level language computer, definition      3—4 see
High-level language processors      3—47
High-level languages (HLLs), examples      3
Hitachi S810, speed performance of      310t
HLLs      see "High-level languages"
Hot spots, definition      327
Hot spots, fault-avoidance of      501
HP-Focus, microcode pipeline      100f
HP-Focus, register cell design      100f
Huffman codes, definition      106
Hybrid redundancy      518
Hypercube configuration, CM-1      333
Hypercube, parallel-serial control      145
Hypertorus interconnection, FP machines      420 420f
Ibarra, Palis, Kim transformation method      481—482
IBM 308X, cache coherence      168
IBM 3090, cache coherence      168
IBM 3090, caching scheme      168
IBM 360/370, SISD architecture      180
IBM 360/370, synchronization in      161
IBM 360/91, parallelism detection by      154
IBM 360/91, time multiplexing      169
IBM 801      10—13
IBM 801, advantages of      11
IBM 801, characteristics      10
IBM 801, compiler      13
IBM 801, delayed branch in      11—12
IBM 801, history      10
IBM 801, interrupt strategy of      10—11
IBM 801, invariant code motion in      11
IBM 801, small object code size of      13
IBM APL Machine      36
IBM APL Machine, hardware implementation      37
IBM APL Machine, performance      37
IBM GF11, SIMD architecture      180
IBM/RP3      321—327
IBM/RP3, architecture      321—323
IBM/RP3, cache coherency in      324
IBM/RP3, configuration      324
IBM/RP3, flexible organization of      324
IBM/RP3, memory space and addressing in      325f
IBM/RP3, networks in      326—327
IBM/RP3, PME design      323f
Illiac IV, SIMD architecture      180
Illiac network      235—236
Illiac network, definition      235
Illiac network, diagram      235f
Illiac network, MIMD mode      236
Illiac network, SIMD mode      236
Imperative languages, characterization of      406—408 see "Pascal"
Implicit programming, for parallel processing      185
Indirect execution      5
Inmos transputer      20—22
Inmos transputer, characteristics      20
Inmos transputer, cycle time      22
Inmos transputer, instructions      20—21
Inmos transputer, large register file      21
Inmos transputer, OCCAM programming      20
Inmos transputer, support for interprocessor communication      21
Instruction fetch and decode      75—76
Instruction format design, gallium arsenide implementation      106—115
Instruction formats, compact, design of      107
Instruction formats, compact, gallium arsenide implementation      107
Instruction formats, compact, Stanford MIPS      107—108
Instruction formats, compact, Transputer      108
Instruction pipeline, silicon      97
Instruction pipelining, solution to overhead problem in multiprocessors      151
Instruction tree, definition      213
Intel 432      542—543
Intel hypercube, comparison of, with other multiprocessors      173
Intel iAPX 432      25—27
Intel iAPX 432, features of      27
Intel iAPX 432, hardware implementation of      27
Intel iAPX 432, microprocessing      26f
Intel 432, duplication and matching for fault detection      542
Intel 432, fault confinement regions in      542
Intel 432, functional redundancy checking in      543f
Intel 432, hardware error-logging mechanism      542
Intelligence processing, definition      308
Intelligent record operation, NON-VON supercomputer      339—340
Interchange box, states of      240f
Interconnection function, definition      228
Interconnection networks      225—264
Interconnection networks, definition      228
© Electronic library of the Board of Trustees of the Faculty of Mechanics and Mathematics, MSU, 2004–2024