Duffy J. — Concurrent Programming on Windows





Title: Concurrent Programming on Windows

Author: Duffy J.

Annotation:

Simply put, Joe Duffy is a worldwide authority on this topic. I don't say that just from working with him (his office was five doors down the hall from mine), but also from the conversations I've seen him dominate with other visionaries inside Microsoft. If you don't take my word for it, just look at the elegance of the Parallel Extensions to the .NET Framework, for which Joe was the dev lead and architect. If you want to see examples of how clearly he can put his thoughts to writing, just visit his blog (http://www.bluebytesoftware.com/). Five stars are not enough for this seminal piece of work.


Language: en

Category: Computer science/

Subject index status: index with page numbers is ready


Year of publication: 2008

Number of pages: 985

Added to catalog: 18.08.2014

Subject index
Iterations, dynamic (on demand) decomposition      669—675
Iterations, static decomposition and      662—663
Iterations, striped      667—669
initonly field modifier      34—35
Java, exiting and entering CLR locks      274—275
Java, JSR133 memory model specification      509—510
KD.EXE (Kernel Debugger)      251
Kernel Debugger (KD.EXE)      251
Kernel synchronization in managed code      204—208
Kernel synchronization, asynchronous procedure calls      208—210
Kernel synchronization, auto-reset and manual-reset events      see "Auto-reset events", "Manual-reset events"
Kernel synchronization, debugging kernel objects      250—251
Kernel synchronization, mutex/semaphore example      224—226
Kernel synchronization, overview of      183—184
Kernel synchronization, signals and waiting      184—204 241—250
Kernel synchronization, signals and waiting, CoWaitForMultipleHandles      202—203
Kernel synchronization, signals and waiting, example of      243—244
Kernel synchronization, signals and waiting, message waits      195—198
Kernel synchronization, signals and waiting, MsgWaitForMultipleObjects(Ex)      198—202
Kernel synchronization, signals and waiting, overview of      184—186 241—243
Kernel synchronization, signals and waiting, reasons to use kernel objects      186—189
Kernel synchronization, signals and waiting, waiting in native code      189—195
Kernel synchronization, signals and waiting, when to pump messages      203—204
Kernel synchronization, signals and waiting, with auto-reset events      244—248
Kernel synchronization, signals and waiting, with manual-reset events      248—250
Kernel synchronization, using mutexes      211—219
Kernel synchronization, using semaphores      219—224
Kernel synchronization, using sparingly      253
Kernel synchronization, waitable timers      see "Waitable timers"
Kernel, fibers and      430
Kernel, overview of      183—184
Kernel, reasons to use for synchronization      186—189
Kernel, support for true waiting in      64—65
Kernel, synchronization-specific      184
Kernel-mode APCs      208—209
Kernel-mode stacks      82
Keyed events      268—270 289
KTHREAD      145—146 152
Lack of preemption      576 577
Lamport's bakery algorithm      54—55
Latch      66
Latent concurrency      5 867
Layers, parallelism      8—10
lazy allocation      267—268
Lazy futures      689
Lazy initialization in .NET      520—527
Lazy initialization in VC++      528—534
LazyInit<T>      917—919
LeaveCriticalSection, ensuring thread always leaves      261—263
LeaveCriticalSection, fibers and      449
LeaveCriticalSection, leaving critical section      260—261
LeaveCriticalSection, leaving unowned critical section      261
LeaveCriticalSection, low resource conditions and      267—268
LeaveCriticalSection, process shutdown      563—564
LeaveCriticalSectionWhenCallbackReturns      350—351
Leveled locks      see "Lock leveling"
LFENCE (Load fence)      512
Libraries, designing reusable      865—886
Libraries, designing reusable, blocking      884—885
Libraries, designing reusable, further reading      885
Libraries, designing reusable, locking models      867—870
Libraries, designing reusable, major themes      866—867
Libraries, designing reusable, reliability      875—879
Libraries, designing reusable, scalability and performance      881—884
Libraries, designing reusable, scheduling and threads      879—881
Libraries, designing reusable, using locks      870—875
Linear pipelines      711
Linear speedups      758—760
Linearizability, managing state with      30—31
Linearization point      30 520
lInitialCount parameter      222
Linked lists      617—620 621—626
LINQ      see "PLINQ (Parallel LINQ)"
LIST_HEADER data structure      538—540
Livelocks, concurrency causing      11
Livelocks, implementing critical regions without      47
Livelocks, overview of      601—603
Liveness hazards      572—609
Liveness hazards, deadlock      572—597
Liveness hazards, deadlock, avoiding      575—577
Liveness hazards, deadlock, avoiding with lock leveling      581—589
Liveness hazards, deadlock, avoiding with The Banker's Algorithm      577—582
Liveness hazards, deadlock, detecting      589—590
Liveness hazards, deadlock, detecting with timeouts      594
Liveness hazards, deadlock, detecting with Vista WCT      594—597
Liveness hazards, deadlock, detecting with Wait Graph Algorithm      590—594
Liveness hazards, deadlock, examples of      572—575
Liveness hazards, defined      545
Liveness hazards, livelocks      601—603
Liveness hazards, lock convoys      603—605
Liveness hazards, missed wake-ups      597—601
Liveness hazards, priority inversion and starvation      608—609
Liveness hazards, stampedes      605—606
Liveness hazards, two-step dance      606—608
lMaximumCount parameter, CreateSemaphore      222
Load balanced pipelines      716—717
Load fence (LFENCE)      512
Load-after-store dependence      485
Loader lock      116
Loads, .NET memory models and      516—518
Loads, atomic      487—492 499—500
Loads, hardware memory models and      511
Loads, imbalances, and speed-up      765—766
LocalDataStoreSlot, TLS      123
LocalPop, work stealing queue      637
LocalPush, work stealing queue      637 640
Lock convoys      165 289 603—605
Lock free algorithms      28
Lock free FIFO queue      632—636
Lock free programming, defined      477
Lock free programming, designing reusable libraries      882
Lock free programming, overview of      517—520
Lock free reading, dictionary (hashtable)      627—631
Lock freedom      518—519 see
Lock hierarchies      see "Lock leveling"
Lock leveling, avoiding deadlock with      875—876
Lock leveling, examples of using      582—584
Lock leveling, inconvenience of      582
Lock leveling, LOCK_TRACING symbol in      589
Lock leveling, overview of      581
Lock leveling, sample implementation in .NET      584—589
Lock ordering      see "Lock leveling"
Lock ranking      see "Lock leveling"
lock statement      870
Lock-free data structures      632—640
Lock-free data structures, general-purpose lock free FIFO queue      632—636
Lock-free data structures, parallel containers and      615
Lock-free data structures, work stealing queue      636—640
LockFreeQueue<T> class      632—636
Locking models, libraries      867—870
Locking models, libraries, documenting      870
Locking models, libraries, protecting instant state      868—869
Locking models, libraries, protecting static state      867—868
Locking models, libraries, using isolation and immutability      869—870
LockRecursionPolicy, ReaderWriterLockSlim      294
Locks      see also "Interlocked operations"
Locks and process shutdown      870—875
Locks as concurrency problem      10
locks command (!)      271
Locks, as unfair in newer OSs      217
Locks, CLR      272—287
Locks, CLR, debugging monitor ownership      285—287
Locks, CLR, defining      254
Locks, CLR, entering and leaving      272—281
Locks, CLR, monitor implementation      283—285
Locks, CLR, overview of      272
Locks, CLR, reliability and monitors      281—283
Locks, deadlocks without      574—575
Locks, Mellor-Crummey-Scott (MCS)      778—781
Locks, simultaneous multilock acquisition      578—581
Locks, spin only      772—778
Locks, two-phase protocols for      767—769
LOCK_TRACING symbol, lock leveling      589
Loop blocking      678
Loops, data parallelism and      659—661
Loops, deciding to "go parallel" and      756—757
Loops, loop blocking      678
Loops, mapping over input data as application of parallel loops      675—676
Loops, nesting loops      677—678
Loops, prerequisites for parallelizing      662
Loops, reductions and scans with      678—681
Low resource conditions      266—270 290—291
Low-cost, implementing critical regions with      47
Low-lock code examples      520—541
Low-lock code examples, Dekker's algorithm      540—541
Low-lock code examples, lazy initialization      520—527 528—534
Low-lock code examples, nonblocking stack and ABA problem      534—537
Low-lock code examples, Win32 singly linked lists (Slists)      538—540
lpName argument, mutex      213
lpParameter argument, converting threads into fibers      438—439
lpParameter argument, CreateFiber(Ex)      435—437
lpParameter argument, CreateThread      91
lpPreviousCount, ReleaseSemaphore      223—224
lpThreadAttributes, CreateThread      90
lpThreadId parameter, CreateThread API      92—93
LPVOID parameter, converting threads into fibers      438
LPVOID parameter, CreateFiber(Ex)      436
LPVOID parameter, CreateThread API      91
LPVOID value, TLS      118—119
Managed code      see also "CLR"
Managed code, aborting threads      109—113
Managed code, APCs and lock reliability in      878
Managed code, fiber support not available for      429 433
Managed code, kernel synchronization in      204—208
Managed code, overview of      85—87
Managed code, process shutdown      569—571
Managed code, thread local storage      121—124
Managed code, triggering thread exit      103
Managed code, using CLR thread pool in      see "CLR thread pool"
Managed debugging assistant (MDA)      575
ManagedThreadId property      101
Manual-reset events      226—234
Manual-reset events, creating and opening events      228—230
Manual-reset events, events and priority boosts      232—234
Manual-reset events, implementing queue with      248—250
Manual-reset events, overview of      226—227
Manual-reset events, setting and resetting events      230—231
ManualResetEventSlim      919—920
Map/reduce paradigm      658
Mapping over input data      675—676
Marshal-by-bleed      279
Marshal.GetLastWin32Error      881
MarshalByRefObject      279
Maximum count, semaphores      222
Maximum threads, CLR thread pool      379—382
Maximum threads, deadlocks from low      382—385
Maximum threads, Vista thread pool      344 348 353
MAXIMUM_WAIT_OBJECTS, blocking and pumping messages      202
MAXIMUM_WAIT_OBJECTS, registering wait callbacks in thread pools      322—323
MAXIMUM_WAIT_OBJECTS, waiting in Win32      190
MaxStackSize, creating threads in .NET      99
MaxStackSize, specifying stack changes      132
MaxStackSize, TaskManagerPolicy      903
MDA (managed debugging assistant)      575
Measuring, speedup efficiency      761—762
Mellor-Crummey-Scott (MCS) locks      778—781
Memory consistency models      506—520
Memory consistency models, .NET memory models      516—518
Memory consistency models, hardware memory models      509—511
Memory consistency models, lock free programming      518—520
Memory consistency models, memory fences      511—515
Memory consistency models, overview of      506—508
Memory fences      511—515
Memory fences, creating in programs      513—515
Memory fences, double-checked locking in VC++ and      528
Memory fences, hardware memory models and      510
Memory fences, interlocked operations implying      492
Memory fences, overview of      511
Memory fences, release-followed-by-acquire-fence hazard      515
Memory fences, types of      511—513
Memory load and store reordering      478—486
Memory load and store reordering, critical regions as fences      484—485
Memory load and store reordering, impact of data dependence on      485—486
Memory load and store reordering, overview of      478—480
Memory load and store reordering, what can go wrong      481—484
Memory models and lock freedom      506—543
Memory models and lock freedom, .NET memory models      516—518
Memory models and lock freedom, defining      59—60
Memory models and lock freedom, hardware atomicity      see "Hardware atomicity"
Memory models and lock freedom, hardware memory models      509—511
Memory models and lock freedom, lock free programming      518—520
Memory models and lock freedom, low-lock code examples      see "Low-lock code examples"
Memory models and lock freedom, memory fences      511—515
Memory models and lock freedom, memory load and store reordering      478—486
Memory models and lock freedom, overview of      477—478
Memory, slim reader/writer locks and      289
Memory, stack reserve/commit sizes and      130—133
Merging, PLINQ      912—914
Message loops      see "Message pumps"
Message passing      71—73
Message passing interface (MPI)      720
Message pumps, GUI and COM      195—198
Message pumps, overview of      830—833
Message-based parallelism      658 719—720
MFENCE (full fence)      512—515
Microprocessor architectures      178—179
Microsoft kernel debuggers      271
Microsoft SQL Server      433
Microsoft Windows Internals (Russinovich and Solomon)      145 154
minFreeThreads element, httpRuntime      384—385
Minimum threads, CLR thread pool      379—382
Minimum threads, delays from low      385—386
Minimum threads, Vista thread pool      344 348 353
MinProcessors, TaskManagerPolicy      903
Missed pulses      597—601
Missed wake-ups      597—601
MMCSS (multimedia class scheduler service)      167
Modal loop, GUIs      198
Modeling      4
Monitor, creating fences      514
Monitor.Enter method, avoiding blocking      275—277
Monitor.Enter method, CLR locks      272—273
Monitor.Enter method, ensuring thread always leaves monitor      273—275
Monitor.Enter method, locking on AppDomain agile objects      279
Monitor.Enter method, reliability and CLR monitors      281—283
Monitor.Enter method, using value types      277—278
Monitor.Exit method, avoiding blocking      275—277
Monitor.Exit method, CLR locks      272—273
Monitor.Exit method, ensuring thread always leaves monitor      273—275
Monitor.Exit method, using value types      277—278
Monitors, .NET Framework      68—70 309—312
Monitors, CLR, avoiding blocking      275—276
Monitors, CLR, exiting and entering      272—275
Monitors, CLR, implementing      283—285
Monitors, CLR, overview of      272
Monitors, CLR, reliability and      281—283
Monitors, CLR, using value types      277—278
MPI (message passing interface)      720
MCS (Mellor-Crummey-Scott) locks      778—781
MSDN Magazine      590
MsgWaitForMultipleObjects(Ex) API, kernel synchronization      198—202
MsgWaitForMultipleObjects(Ex) API, motivation for using      833
MsgWaitForMultipleObjects(Ex) API, waiting for managed code      207
MTAs (multithreaded apartments)      575 834—835
MTAThreadAttribute      835
MultiLockHelper.Enter      578
Multimedia class scheduler service (MMCSS)      167
Mutants      see "Mutexes"
Mutexes      211—219
Mutexes, abandoned      217—219
© Electronic library of the Board of Trustees of the Faculty of Mechanics and Mathematics of MSU, 2004-2024