Milutinovic V. — Computer Architecture: Concepts and Systems




Title: Computer Architecture: Concepts and Systems

Author: Milutinovic V.

Abstract:

Computation tasks have become more demanding of computers, requiring greater processing speed and larger data-handling capacity than ever before. This text approaches these tasks from an R&D perspective and presents solutions that focus on improvements in the system domain. With contributions from 20 recognized experts from the nation's leading universities and research institutions in the field of computer architecture, this book combines the latest information on uniprocessor and multiprocessor design based on mode of use: numeric, functional, or symbolic. The authors' knowledge and experience are brought together to create a unique view of 12 carefully selected topics, including: VLSI processor architecture; RISC types; design trade-offs for processors based on both silicon and GaAs technologies; addressing solutions for increasing machine performance; concepts and systems of numeric and symbolic processing; and new advances in supercomputers and architectural support for artificial intelligence. The book clearly presents concepts and techniques using simple but effective examples and representative case studies.


Language: en

Category: Computer science/

Subject index status: index with page numbers is complete


Year of publication: 1988

Number of pages: 575

Added to catalog: 22.02.2015

Subject index
Interconnection networks, types      228
Invariant code motion, definition      11
Journaling      502
Journaling, error recovery      520
Jover and Kailath transformation method      478—479
Knowledge processing, definition      308
Kuhn method of transformation      467—468
Kung and Lin method of transformation      466
Kung transformation method      472
Kuo, Levy, Musicus method of transformation      484—486
Lam and Mostow transformation method      463
LAMBDA      345
Lambda-reduction architectures      435
Language-corresponding architectures      34—42 see "Fairchild/ISU "IBM "MIT
Language-corresponding architectures, definition      6
Language-directed architectures      24—34
Language-directed architectures, characteristics      24—25
Language-directed architectures, definition      6
Language-directed architectures, examples      25 see "Hewlett "Intel "Motorola "National "Zilog
Languages      see also "Specific names"
Languages for MIMD processing      185—188
Languages for parallel processing      186—190
Languages, SIMD processing      188—190
Leiserson, Rose, Saxe method of transformation      482—484
Lexical processor      43—44
Li and Wah transformation method      476
Lisp      313 see
Lisp machines      345
Load balancing, resource sharing systems      268
Load estimations, window control      294
Localized state-dependent scheduling      275
LOCUS, resource scheduling in      275
Logic circuit families, gallium arsenide, performance levels      89 90
LRU technique      112
Machine code, graph-structured      425
Machine code, traditional      425
Mago's tree machine      437—440
Mago's tree machine, execution in      438—439
Mago's tree machine, expression strings in      437
Mago's tree machine, storage management in      428
Mago's tree machine, token strings in      437
Manchester dataflow machine, architecture      443f
Masking redundancy      514—516 526
Masking redundancy, uniprocessors      543—545
Massively parallel processor (MPP)      204—208
Massively parallel processor (MPP), speed of      208
Massively parallel processor (MPP), structure      204 205f
Mean time between failures, evaluation of fault-tolerant systems      507—508
Mean time to failure, fault-tolerant system evaluation      507
Memory access, cache memory      167—168
Memory access, message passing      170
Memory access, multiprocessors      166—174
Memory access, time multiplexing      169—170
Memory bandwidth, RISC architecture      55—56
Memory hierarchy      110—111
Memory references, intercluster reference, Cm* multiprocessor      199—200
Memory references, intracluster reference, Cm* multiprocessor      199
Memory references, local, Cm* multiprocessor      199—200
Memory structures, associative      195—196
Memory system design, gallium arsenide implementation      109—115
Memory systems, hierarchical      110—113
Memory systems, pipelined      113—115
Memory systems, virtual      110
Memory, compiler optimizations      120—125
Memory, content addressable, SIMD processing      193—196
Memory, hot spots in      327
Memory, interleaved structure      523
Message passing      165 170
Message-switched network vs. circuit-switched network      203
Method of alternating directions      384
Microprocessor without interlocked pipe stages (MIPS), comparison with RISC      72—74
Microprocessor without interlocked pipe stages (MIPS), data path of      77 78f
Microprocessor without interlocked pipe stages (MIPS), delayed branches      73
Microprocessor without interlocked pipe stages (MIPS), high bandwidth memory access      72—73
Microprocessor without interlocked pipe stages (MIPS), instruction formats      73
Microprocessor without interlocked pipe stages (MIPS), internal forwarding      73
Microprocessor without interlocked pipe stages (MIPS), pipelining      73
Microprocessor without interlocked pipe stages      see also "Stanford MIPS"
Microprocessors, advanced      3—47
MIMD contour tracing algorithm      197
MIMD machine, configurations of      182
MIMD machine, multiple data streams in      182
MIMD machine, multiple instruction streams in      182
MIMD pipelining, HEP multiprocessor      321
MIMD processing, languages for      185—188
MIMD, parallelism mode      226
Miranker and Winkler method of transformation      469—470
MISD machine architecture      182
MIT Scheme-79/81      35—36
MIT Scheme-79/81, limitation      35—36
MIT Scheme-79/81, Lisp      35
MODFETs      86—87
Moldovan and Fortes method of transformation      468—469
Motorola 68000, serial, single-level control      142
Motorola MC68020      27—28
Motorola MC68020, CPU      28
Motorola MC68020, features of      27—28
Motorola MC68020, gallium arsenide implementation      95—96
Motorola MC68020, instruction cache      28
Motorola MC68020, performance      28
MPP, SIMD architecture      180
Multiple processor, multicomputer structure      524f
Multiple processor, multiprocessor structure      525f
Multiple-SIMD architecture      183
Multiplier/divider design, gallium arsenide      104—105
Multiply, implementation of, on RISC      109
Multiprocessing      135—177
Multiprocessing, parallelism in      149—151
Multiprocessing, partitioning      149—155
Multiprocessor architectures, types      136—137
Multiprocessor systems      309
Multiprocessors vs. vector machines      137
Multiprocessors, artificial intelligence      330—344
Multiprocessors, cache memory in      167—169
Multiprocessors, comparison of      170—174
Multiprocessors, distributed memory      360
Multiprocessors, functional programming      416—424
Multiprocessors, interconnection network      166
Multiprocessors, memory access      166—174
Multiprocessors, process execution modules in      146—147
Multiprocessors, program slicing      138f
Multiprocessors, scheduling in      155—158
Multiprocessors, supercomputing      315—330
Multiprocessors, synchronization      159—166
Multistage cube/shuffle-exchange networks      238—244
Multistage cube/shuffle-exchange networks, definition      238
Multistage cube/shuffle-exchange networks, structure      238—239
Multitasking, definition      316
N-cube interconnections, FP machines      420—421 421f
National Semiconductor NS32032      30—31
National Semiconductor NS32032, data path      30f
National Semiconductor NS32032, memory traffic reduction      31
National Semiconductor NS32032, translation look-aside buffer      31
Navier — Stokes equations, definition      382
NEC SX2, speed performance of      310t
Networks, carrier-sense-multiaccess      276—277 278
Networks, cube      229—232
Networks, data manipulator      252—258
Networks, extra-stage cube      244—252
Networks, generalized cube      239—240
Networks, IBM/RP3      326—327
Networks, Illiac      235—236
Networks, interconnection      225—264
Networks, interprocessor communication      226
Networks, multistage cube/shuffle-exchange      238—244
Networks, plus-minus 2^i (PM2I)      232—235
Networks, resource sharing interconnection      265
Networks, shuffle-exchange      237—238
Networks, single contention-bus      276—277
Networks, single-stage      228—238
NMOS silicon, comparison with DCFL E/D-MESFET      92
NON-VON supercomputer      338—341
NON-VON supercomputer, architecture, applications      331t
NON-VON supercomputer, configuration      339
NON-VON supercomputer, dataflow graphs in      342f
NON-VON supercomputer, I/O subsystem      340
NON-VON supercomputer, intelligent record operation of      339—340
NON-VON supercomputer, prototypes      340
Numeric processing vs. symbolic processing      309—313
Numerical supercomputing      309—311
NYU Supercomputer, control mechanisms in      171
NYU Ultracomputer      201—204
NYU Ultracomputer, comparison with other multiprocessors      172t
NYU Ultracomputer, fetch-and-add instruction in      201
NYU Ultracomputer, fetch-and-phi operation in      201—202
NYU Ultracomputer, network design      202—203
NYU Ultracomputer, omega network design of      202
NYU Ultracomputer, parallel-serial control      142
NYU Ultracomputer, self-scheduling in      158
NYU Ultracomputer, structure      201 202f
NYU Ultracomputer, switching element in      163f
OCCAM programming, Inmos transputer      20
OCCAM, partitioning mechanisms of      153
Optimal resource-allocation algorithms      277—281
Optimal resource-allocation algorithms, definition      277
Optimal-greedy window control      292—293
Packaging constraints, IC      79
Packaging constraints, influence of, on architecture      79—80
Packet-switched network      203
Page faults, definition      28
Parafrase compiler, detection of parallelism      153—154
Parallel 300 XR      538—539
Parallel algorithms, examples      190—198
Parallel computer systems, classification of      179—185 see "MISD" "SIMD" "SISD"
Parallel inference machine      341—344
Parallel inference machine, configuration      343f 343—344
Parallel model of computation      140
Parallel processing      178—224
Parallel processing systems, case studies      198—217
Parallel processing systems, comparison of      217—219
Parallel processing, languages for      185—190
Parallel sequencing      140—141
Parallel sequencing, data-driven execution      140—141
Parallel sequencing, demand-driven execution      141
Parallelism detection      152—154
Parallelism detection, clustering      154—155
Parallelism in multiprocessor architecture      150—151
Parallelism, control of, in ZAPP      444
Parallelism, FP machines      428—432
Parallelism, invoking, controlling in FP machines      430—431
Parallelism, MIMD      226
Parallelism, SIMD      226
Partitionability, definition      229
Partitionable-SIMD/MIMD machine      184
Partitioning, generalized cube network      242—244
Partitioning, multiprocessors      149—155
Pascal interactive direct execution computer (PASDEC)      44
PASM parallel processing system      214—219
PASM parallel processing system, configuration      214f 215—218
PASM parallel processing system, interconnection network      215
PASM parallel processing system, memory management system      217
PASM parallel processing system, memory storage system      216—217
PASM parallel processing system, memory storage unit      216f
PASM parallel processing system, microcontrollers      215
PASM parallel processing system, processing elements in      215
PE address masking scheme, definition      190
Penalty ratio, definition      6
PIM-D, ICOT, architecture, applications      331t
Pipeline interlocking, advantages      17—18
Pipeline, MIPS      17f
Pipelined computers, definition      309
Pipelined memory systems      113—115
Pipelining      74—75 113—115
Pipelining, definition      74 359
Pipelining, dynamic pipeline net in Remps      329
Pipelining, effect of, on performance      74—75
Pipelining, MIMD in HEP multiprocessor      321
Pipelining, solution to overhead problem in multiprocessors      151
Pipelining, systolic processing      457
Pluribus      539—541
Pluribus, configuration      540f
Pluribus, fault recovery in      539
Pluribus, feedback parity in      539
Plus-minus 2^i (PM2I) network      232—235
Plus-minus 2^i (PM2I) network, MIMD mode      234
Plus-minus 2^i (PM2I) network, partitioning of      234—235
Plus-minus 2^i (PM2I) network, SIMD mode      233
Prefetching      122—123 124—125
Present state, definition      406
Probabilistic scheduling      274
Process execution module (PEM)      146
Processor configuration, gallium arsenide implementation      95—96
Program analysis, AeroSim      390
Program counter      77
Program description trees      377—380
Program description trees, nodes types used in      378t
Program evaluation, dataflow      425
Program evaluation, reduction      425
Program evaluation, sequential execution      424
Program representation      424—432
Program representation, dataflow graph      426
Program representation, FP machines      424—426
Program representation, graph-structured      425—426
Program representation, token-string      426
Program slicing, vector machines vs. multiprocessors      138f
PROLOG      313
Pure systolic array, definition      457
Quasifunctional programming languages      416
Queuing, multiprocessor      166—167
Quinton transformation method      473—474
Ramakrishnan, Fussell, Silberschatz transformation method      475—476
RAND method      112
Random scheduling      274
Random-access protocols, CSMA networks      278—279
Reconfigurable duplication      517—518
Reconfigurable systems, error detection in      517—519
Reconfiguration, increases system reliability      516—519
Recursive doubling, definition      192
Rediflow system, determining parallelism in      431
Rediflow system, mapping approach in      430
Reduced architectures      see "Reduced instruction set computers"
Reduced Instruction Set Computer (RISC)      4
Reduced instruction set computer (RISC) architecture      48—83
Reduced instruction set computer (RISC) architecture, advantages      53—55
Reduced instruction set computer (RISC) architecture, characteristics      53
Reduced instruction set computer (RISC) architecture, compilers for      56
Reduced instruction set computer (RISC) architecture, disadvantages      55—56
Reduced instruction set computer (RISC) architecture, effect of, on hardware technology      56
Reduced instruction set computer (RISC) architecture, examples      56—74
Reduced instruction set computer (RISC) architecture, history      48
Reduced instruction set computer (RISC) architecture, memory bandwidth issue      55—56
Reduced instruction set computer (RISC), comparison with MIPS      72—74
Reduced instruction set computer (RISC), delayed branches      73
Reduced instruction set computer (RISC), gallium arsenide processor design      106
Reduced instruction set computer (RISC), high bandwidth memory access      72—73
Reduced instruction set computer (RISC), implementation of multiply on      109
Reduced instruction set computer (RISC), instruction set      60—61f
Reduced instruction set computer (RISC), instruction sets      73
Reduced instruction set computer (RISC), internal forwarding      73
Reduced instruction set computer (RISC), pipelining      73
Reduced instruction set computers (RISC)      see also "IBM 801" "Inmos "Ridge "RIMMS" "Stanford "UC
Reduced instruction set computers (RISC), compilers for      8
Reduced instruction set computers (RISC), data reusability in      8—9
Reduced instruction set computers (RISC), examples of      9—10
Reduced instruction set computers (RISC), execution speed of      7
Reduced instruction set computers (RISC), floating point calculations in      7
Reduced instruction set computers (RISC), instruction sets for      7
Reduced instruction set computers (RISC), large register files used by      8
Reduced instruction set computers (RISC), load/store organization of      8
Reduced instruction set computers (RISC), penalty ratio of      6
© Electronic library of the board of trustees of the Faculty of Mechanics and Mathematics, Moscow State University, 2004-2024