Advanced concepts in operating systems : distributed, database, and multiprocessor operating systems / (Record no. 2782)
000 -LEADER | |
---|---|
fixed length control field | 17082cam a2200181 a 4500 |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER | |
International Standard Book Number | 9780070472686 (acid-free paper) : |
040 ## - CATALOGING SOURCE | |
Transcribing agency | CUS |
082 00 - DEWEY DECIMAL CLASSIFICATION NUMBER | |
Classification number | 005.23 |
100 1# - MAIN ENTRY--PERSONAL NAME | |
Personal name | Singhal, Mukesh. |
245 10 - TITLE STATEMENT | |
Title | Advanced concepts in operating systems : distributed, database, and multiprocessor operating systems / |
Statement of responsibility, etc. | Mukesh Singhal and Niranjan G. Shivaratri. |
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT) | |
Place of publication, distribution, etc. | New York : |
Name of publisher, distributor, etc. | McGraw-Hill, |
Date of publication, distribution, etc. | c1994. |
300 ## - PHYSICAL DESCRIPTION | |
Extent | xxii, 522 p. : |
Other physical details | ill. ; |
Dimensions | 25 cm. |
440 #0 - SERIES | |
Title | McGraw-Hill series in computer science |
504 ## - BIBLIOGRAPHY, ETC. NOTE | |
Bibliography, etc | Includes bibliographical references and index. |
505 ## - FORMATTED CONTENTS NOTE | |
Formatted contents note | 1 Overview<br/>1.1 Introduction<br/>1.2 Functions of an Operating System<br/>1.3 Design Approaches<br/>1.3.1 Layered Approach<br/>1.3.2 The Kernel Based Approach<br/>1.3.3 The Virtual Machine Approach<br/>1.4 Why Advanced Operating Systems<br/>1.5 Types of Advanced Operating Systems<br/>1.6 An Overview of the Book<br/>1.7 Summary<br/>1.8 Further Reading<br/>References<br/>2 Synchronization Mechanisms<br/>2.1 Introduction<br/>2.2 Concept of a Process<br/>2.3 Concurrent Processes<br/>2.3.1 Threads<br/>2.4 The Critical Section Problem<br/>2.4.1 Early Mechanisms for Mutual Exclusion<br/>2.4.2 Semaphores<br/>2.5 Other Synchronization Problems<br/>2.5.1 The Dining Philosophers Problem<br/>2.5.2 The Producer-Consumer Problem<br/>2.5.3 The Readers-Writers Problem<br/>2.5.4 Semaphore Solution to Readers-Writers Problem<br/>2.6 Language Mechanisms for Synchronization<br/>2.6.1 Monitors<br/>2.6.2 Serializers<br/>2.6.3 Path Expressions<br/>2.6.4 Communicating Sequential Processes (CSP)<br/>2.6.5 Ada Rendezvous<br/>2.7 Axiomatic Verification of Parallel Programs<br/>2.7.1 The Language<br/>2.7.2 The Axioms<br/>2.7.3 Auxiliary Variables<br/>2.7.4 An Example: Proof of Mutual Exclusion<br/>2.8 Summary<br/>2.9 Further Reading<br/>Problems<br/>References<br/>3 Process Deadlocks<br/>3.1 Introduction<br/>3.2 Preliminaries<br/>3.2.1 Definition<br/>3.2.2 Deadlock versus Starvation<br/>3.2.3 Fundamental Causes of Deadlocks<br/>3.2.4 Deadlock Handling Strategies<br/>3.3 Models of Deadlocks<br/>3.3.1 The Single-Unit Request Model<br/>3.3.2 The AND Request Model<br/>3.3.3 The OR Request Model<br/>3.3.4 The AND-OR Request Model<br/>3.3.5 The P-out-of-Q Request Model<br/>3.4 Models of Resources<br/>3.4.1 Types of Resources<br/>3.4.2 Types of Resource Accesses<br/>3.5 A Graph-Theoretic Model of a System State<br/>3.5.1 General Resource Systems<br/>3.5.2 General Resource Graph<br/>3.5.3 Operations on the General Resource Graph<br/>3.6 Necessary and Sufficient Conditions 
for a Deadlock<br/>3.6.1 The Graph Reduction Method<br/>3.7 Systems with Single-Unit Requests<br/>3.8 Systems with only Consumable Resources<br/>3.9 Systems with only Reusable Resources<br/>3.9.1 Systems with Single-Unit Resources<br/>3.9.2 Deadlock Detection<br/>3.9.3 Deadlock Prevention<br/>3.9.4 Deadlock Avoidance<br/>3.9.5 Pros and Cons of Different Strategies<br/>3.10 Summary<br/>3.11 Further Reading<br/>Problems<br/>References<br/>Part II Distributed Operating Systems<br/>4 Architectures of Distributed Systems<br/>4.1 Introduction<br/>4.2 Motivations<br/>4.3 System Architecture Types<br/>4.4 Distributed Operating Systems<br/>4.5 Issues in Distributed Operating Systems<br/>4.5.1 Global Knowledge<br/>4.5.2 Naming<br/>4.5.3 Scalability<br/>4.5.4 Compatibility<br/>4.5.5 Process Synchronization<br/>4.5.6 Resource Management<br/>4.5.7 Security<br/>4.5.8 Structuring<br/>4.5.9 Client-server Computing Model<br/>4.6 Communication Networks<br/>4.6.1 Wide Area Networks<br/>4.6.2 Local Area Networks<br/>4.7 Communication Primitives<br/>4.7.1 The Message Passing Model<br/>4.7.2 Remote Procedure Calls<br/>4.7.3 Design Issues in RPC<br/>4.8 Summary<br/>4.9 Further Reading<br/>References<br/>5 Theoretical Foundations<br/>5.1 Introduction<br/>5.2 Inherent Limitations of a Distributed System<br/>5.2.1 Absence of a Global Clock<br/>5.2.2 Absence of Shared Memory<br/>5.3 Lamport's Logical Clocks<br/>5.3.1 A Limitation of Lamport's Clocks<br/>5.4 Vector Clocks<br/>5.5 Causal Ordering of Messages<br/>5.6 Global State<br/>5.6.1 Chandy-Lamport's Global State Recording Algorithm<br/>5.7 Cuts of a Distributed Computation<br/>5.8 Termination Detection<br/>5.9 Summary<br/>5.10 Further Reading<br/>Problems<br/>References<br/>6 Distributed Mutual Exclusion<br/>6.1 Introduction<br/>6.2 The Classification of Mutual Exclusion Algorithms<br/>6.3 Preliminaries<br/>6.3.1 Requirements of Mutual Exclusion Algorithms<br/>6.3.2 How to Measure 
Performance<br/>6.4 A Simple Solution to Distributed Mutual Exclusion<br/>6.5 Non-Token-Based Algorithms<br/>6.6 Lamport's Algorithm<br/>6.7 The Ricart-Agrawala Algorithm<br/>6.8 Maekawa's Algorithm<br/>6.9 A Generalized Non-Token-Based Algorithm<br/>6.9.1 Information Structures<br/>6.9.2 The Generalized Algorithm<br/>6.9.3 Static versus Dynamic Information Structures<br/>6.10 Token-Based Algorithms<br/>6.11 Suzuki-Kasami's Broadcast Algorithm<br/>6.12 Singhal's Heuristic Algorithm<br/>6.13 Raymond's Tree-Based Algorithm<br/>6.14 A Comparative Performance Analysis<br/>6.14.1 Response Time<br/>6.14.2 Synchronization Delay<br/>6.14.3 Message Traffic<br/>6.14.4 Universal Performance Bounds<br/>6.15 Summary<br/>6.16 Further Reading<br/>Problems<br/>References<br/>7 Distributed Deadlock Detection<br/>7.1 Introduction<br/>7.2 Preliminaries<br/>7.2.1 The System Model<br/>7.2.2 Resource versus Communication Deadlocks<br/>7.2.3 A Graph-Theoretic Model<br/>7.3 Deadlock Handling Strategies in Distributed Systems<br/>7.3.1 Deadlock Prevention<br/>7.3.2 Deadlock Avoidance<br/>7.3.3 Deadlock Detection<br/>7.4 Issues in Deadlock Detection and Resolution<br/>7.5 Control Organizations for Distributed Deadlock Detection<br/>7.5.1 Centralized Control<br/>7.5.2 Distributed Control<br/>7.5.3 Hierarchical Control<br/>7.6 Centralized Deadlock-Detection Algorithms<br/>7.6.1 The Completely Centralized Algorithm<br/>7.6.2 The Ho-Ramamoorthy Algorithms<br/>7.7 Distributed Deadlock Detection Algorithms<br/>7.7.1 A Path-Pushing Algorithm<br/>7.7.2 An Edge-Chasing Algorithm<br/>7.7.3 A Diffusion Computation Based Algorithm<br/>7.7.4 A Global State Detection Based Algorithm<br/>7.8 Hierarchical Deadlock Detection Algorithms<br/>7.8.1 The Menasce-Muntz Algorithm<br/>7.8.2 The Ho-Ramamoorthy Algorithm<br/>7.9 Perspective<br/>7.10 Summary<br/>7.11 Further Reading<br/>Problems<br/>References<br/>8 Agreement Protocols<br/>8.1 Introduction<br/>8.2 The System Model<br/>8.2.1 Synchronous versus 
Asynchronous Computations<br/>8.2.2 Model of Processor Failures<br/>8.2.3 Authenticated versus Non-Authenticated Messages<br/>8.2.4 Performance Aspects<br/>8.3 A Classification of Agreement Problems<br/>8.3.1 The Byzantine Agreement Problem<br/>8.3.2 The Consensus Problem<br/>8.3.3 The Interactive Consistency Problem<br/>8.3.4 Relations Among the Agreement Problems<br/>8.4 Solutions to the Byzantine Agreement Problem<br/>8.4.1 The Upper Bound on the Number of Faulty Processors<br/>8.4.2 An Impossibility Result<br/>8.4.3 Lamport-Shostak-Pease Algorithm<br/>8.4.4 Dolev et al.'s Algorithm<br/>8.5 Applications of Agreement Algorithms<br/>8.5.1 Fault-Tolerant Clock Synchronization<br/>8.5.2 Atomic Commit in DDBS<br/>8.6 Summary<br/>8.7 Further Reading<br/>Problems<br/>References<br/>Part III Distributed Resource Management<br/>9 Distributed File Systems<br/>9.1 Introduction<br/>9.2 Architecture<br/>9.3 Mechanisms for Building Distributed File Systems<br/>9.3.1 Mounting<br/>9.3.2 Caching<br/>9.3.3 Hints<br/>9.3.4 Bulk Data Transfer<br/>9.3.5 Encryption<br/>9.4 Design Issues<br/>9.4.1 Naming and Name Resolution<br/>9.4.2 Caches on Disk or Main Memory<br/>9.4.3 Writing Policy<br/>9.4.4 Cache Consistency<br/>9.4.5 Availability<br/>9.4.6 Scalability<br/>9.4.7 Semantics<br/>9.5 Case Studies<br/>9.5.1 The Sun Network File System<br/>9.5.2 The Sprite File System<br/>9.5.3 Apollo DOMAIN Distributed File System<br/>9.5.4 Coda<br/>9.5.5 The x-Kernel Logical File System<br/>9.6 Log-Structured File Systems<br/>9.6.1 Disk Space Management<br/>9.7 Summary<br/>9.8 Further Readings<br/>Problems<br/>References<br/>10 Distributed Shared Memory<br/>10.1 Introduction<br/>10.2 Architecture and Motivation<br/>10.3 Algorithms for Implementing DSM<br/>10.3.1 The Central-Server Algorithm<br/>10.3.2 The Migration Algorithm<br/>10.3.3 The Read-Replication Algorithm<br/>10.3.4 The Full-Replication Algorithm<br/>10.4 Memory Coherence<br/>10.5 Coherence Protocols<br/>10.5.1 Cache Coherence in the 
PLUS System<br/>10.5.2 Unifying Synchronization and Data Transfer in Clouds<br/>10.5.3 Type-Specific Memory Coherence in the Munin System<br/>10.6 Design Issues<br/>10.6.1 Granularity<br/>10.6.2 Page Replacement<br/>10.7 Case Studies<br/>10.7.1 IVY<br/>10.7.2 Mirage<br/>10.7.3 Clouds<br/>10.8 Summary<br/>10.9 Further Reading<br/>Problems<br/>References<br/>11 Distributed Scheduling<br/>11.1 Introduction<br/>11.2 Motivation<br/>11.3 Issues in Load Distributing<br/>11.3.1 Load<br/>11.3.2 Classification of Load Distributing Algorithms<br/>11.3.3 Load Balancing versus Load Sharing<br/>11.3.4 Preemptive versus Nonpreemptive Transfers<br/>11.4 Components of a Load Distributing Algorithm<br/>11.4.1 Transfer Policy<br/>11.4.2 Selection Policy<br/>11.4.3 Location Policy<br/>11.4.4 Information Policy<br/>11.5 Stability<br/>11.5.1 The Queuing-Theoretic Perspective<br/>11.5.2 The Algorithmic Perspective<br/>11.6 Load Distributing Algorithms<br/>11.6.1 Sender-Initiated Algorithms<br/>11.6.2 Receiver-Initiated Algorithms<br/>11.6.3 Symmetrically Initiated Algorithms<br/>11.6.4 Adaptive Algorithms<br/>11.7 Performance Comparison<br/>11.7.1 Receiver-initiated versus Sender-initiated Load Sharing<br/>11.7.2 Symmetrically Initiated Load Sharing<br/>11.7.3 Stable Load Sharing Algorithms<br/>11.7.4 Performance Under Heterogeneous Workloads<br/>11.8 Selecting a Suitable Load Sharing Algorithm<br/>11.9 Requirements for Load Distributing<br/>11.10 Load Sharing Policies: Case Studies<br/>11.10.1 The V-System<br/>11.10.2 The Sprite System<br/>11.10.3 Condor<br/>11.10.4 The Stealth Distributed Scheduler<br/>11.11 Task Migration<br/>11.12 Issues in Task Migration<br/>11.12.1 State Transfer<br/>11.12.2 Location Transparency<br/>11.12.3 Structure of a Migration Mechanism<br/>11.12.4 Performance<br/>11.13 Summary<br/>11.14 Further Reading<br/>Problems<br/>References<br/>Part IV Failure Recovery and Fault Tolerance<br/>12 Recovery<br/>12.1 Introduction<br/>12.2 Basic Concepts<br/>12.3 
Classification of Failures<br/>12.4 Backward and Forward Error Recovery<br/>12.5 Backward-Error Recovery: Basic Approaches<br/>12.5.1 The Operation-Based Approach<br/>12.5.2 State-based Approach<br/>12.6 Recovery in Concurrent Systems<br/>12.6.1 Orphan Messages and the Domino Effect<br/>12.6.2 Lost Messages<br/>12.6.3 Problem of Livelocks<br/>12.7 Consistent Set of Checkpoints<br/>12.7.1 A Simple Method for Taking a Consistent Set of Checkpoints<br/>12.8 Synchronous Checkpointing and Recovery<br/>12.8.1 The Checkpoint Algorithm<br/>12.8.2 The Rollback Recovery Algorithm<br/>12.9 Asynchronous Checkpointing and Recovery<br/>12.9.1 A Scheme for Asynchronous Checkpointing and Recovery<br/>12.10 Checkpointing for Distributed Database Systems<br/>12.10.1 An Algorithm for Checkpointing in a DDBS<br/>12.11 Recovery in Replicated Distributed Database Systems<br/>12.11.1 An Algorithm for Site Recovery<br/>12.12 Summary<br/>12.13 Further Readings<br/>Problems<br/>References<br/>13 Fault Tolerance<br/>13.1 Introduction<br/>13.2 Issues<br/>13.3 Atomic Actions and Committing<br/>13.4 Commit Protocols<br/>13.4.1 The Two-Phase Commit Protocol<br/>13.5 Nonblocking Commit Protocols<br/>13.5.1 Basic Idea<br/>13.5.2 The Nonblocking Commit Protocol for Single Site Failure<br/>13.5.3 Multiple Site Failures and Network Partitioning<br/>13.6 Voting Protocols<br/>13.6.1 Static Voting<br/>13.7 Dynamic Voting Protocols<br/>13.8 The Majority Based Dynamic Voting Protocol<br/>13.9 Dynamic Vote Reassignment Protocols<br/>13.9.1 Autonomous Vote Reassignment<br/>13.9.2 Vote Increasing Policies<br/>13.9.3 Balance of Voting Power<br/>13.10 Failure Resilient Processes<br/>13.10.1 Backup Processes<br/>13.10.2 Replicated Execution<br/>13.11 Reliable Communication<br/>13.11.1 Atomic Broadcast<br/>13.12 Case Studies<br/>13.12.1 Targon/32: Fault Tolerance Under UNIX<br/>13.13 Summary<br/>13.14 Further Reading<br/>Problems<br/>References<br/>Part V Protection and Security<br/>14 Resource Security and 
Protection: Access and Flow Control<br/>14.1 Introduction<br/>14.2 Preliminaries<br/>14.2.1 Potential Security Violations<br/>14.2.2 External versus Internal Security<br/>14.2.3 Policies and Mechanisms<br/>14.2.4 Protection Domain<br/>14.2.5 Design Principles for Secure Systems<br/>14.3 The Access Matrix Model<br/>14.4 Implementations of Access Matrix<br/>14.4.1 Capabilities<br/>14.4.2 The Access Control List Method<br/>14.4.3 The Lock-Key Method<br/>14.5 Safety in the Access Matrix Model<br/>14.5.1 Changing the Protection State<br/>14.5.2 Safety in the Access Matrix Model<br/>14.6 Advanced Models of Protection<br/>14.6.1 The Take-Grant Model<br/>14.6.2 Bell-LaPadula Model<br/>14.6.3 Lattice Model of Information Flow<br/>14.7 Case Studies<br/>14.7.1 The UNIX Operating System<br/>14.7.2 The Hydra Kernel<br/>14.7.3 Amoeba<br/>14.7.4 Andrew<br/>14.8 Summary<br/>14.9 Further Reading<br/>Problems<br/>References<br/>15 Data Security: Cryptography<br/>15.1 Introduction<br/>15.2 A Model of Cryptography<br/>15.2.1 Terms and Definitions<br/>15.2.2 A Model of Cryptographic Systems<br/>15.2.3 A Classification of Cryptographic Systems<br/>15.3 Conventional Cryptography<br/>15.4 Modern Cryptography<br/>15.5 Private Key Cryptography: Data Encryption Standard<br/>15.5.1 Data Encryption Standard (DES)<br/>15.5.2 Cipher Block Chaining<br/>15.6 Public Key Cryptography<br/>15.6.1 Implementation Issues<br/>15.6.2 The Rivest-Shamir-Adleman Method<br/>15.6.3 Signing Messages<br/>15.7 Multiple Encryption<br/>15.8 Authentication in Distributed Systems<br/>15.8.1 Authentication Servers<br/>15.8.2 Establishing Interactive Connections<br/>15.8.3 Performing One-Way Communication<br/>15.8.4 Digital Signatures<br/>15.9 Case Study: The Kerberos System<br/>15.9.1 Phase I: Getting the Initial Ticket<br/>15.9.2 Phase II: Getting Server Tickets<br/>15.9.3 Phase III: Requesting the Service<br/>15.10 Summary<br/>15.11 Further Readings<br/>Problems<br/>References<br/>Part VI Multiprocessor 
Operating Systems<br/>16 Multiprocessor System Architectures<br/>16.1 Introduction<br/>16.2 Motivations for Multiprocessor Systems<br/>16.3 Basic Multiprocessor System Architectures<br/>16.3.1 Tightly Coupled versus Loosely Coupled Systems<br/>16.3.2 UMA versus NUMA versus NORMA Architectures<br/>16.4 Interconnection Networks for Multiprocessor Systems<br/>16.4.1 Bus<br/>16.4.2 Cross-Bar Switch<br/>16.4.3 Multistage Interconnection Network<br/>16.5 Caching<br/>16.5.1 The Cache Coherency Problem<br/>16.6 Hypercube Architectures<br/>16.7 Summary<br/>16.8 Further Reading<br/>References<br/>17 Multiprocessor Operating Systems<br/>17.1 Introduction<br/>17.2 Structures of Multiprocessor Operating Systems<br/>17.3 Operating System Design Issues<br/>17.4 Threads<br/>17.4.1 User-Level Threads<br/>17.4.2 Kernel-Level Threads<br/>17.4.3 First-Class Threads<br/>17.4.4 Scheduler Activations<br/>17.5 Process Synchronization<br/>17.5.1 Issues in Process Synchronization<br/>17.5.2 The Test-and-Set Instruction<br/>17.5.3 The Swap Instruction<br/>17.5.4 The Fetch-and-Add Instruction of the Ultracomputer<br/>17.5.5 SLIC Chip of the Sequent<br/>17.5.6 Implementation of Process Wait<br/>17.5.7 The Compare-and-Swap Instruction<br/>17.6 Processor Scheduling<br/>17.6.1 Issues in Processor Scheduling<br/>17.6.2 Coscheduling of the Medusa OS<br/>17.6.3 Smart Scheduling<br/>17.6.4 Scheduling in the NYU Ultracomputer<br/>17.6.5 Affinity Based Scheduling<br/>17.6.6 Scheduling in the Mach Operating System<br/>17.7 Memory Management: The Mach Operating System<br/>17.7.1 Design Issues<br/>17.7.2 The Mach Kernel<br/>17.7.3 Task Address Space<br/>17.7.4 Memory Protection<br/>17.7.5 Machine Independence<br/>17.7.6 Memory Sharing<br/>17.7.7 Efficiency Considerations<br/>17.7.8 Implementation: Data Structures and Algorithms<br/>17.7.9 Sharing of Memory Objects<br/>17.8 Reliability/Fault Tolerance: The Sequoia System<br/>17.8.1 Design Issues<br/>17.8.2 The Sequoia Architecture<br/>17.8.3 Fault 
Detection<br/>17.8.4 Fault Recovery<br/>17.9 Summary<br/>17.10 Further Reading<br/>Problems<br/>References<br/>Part VII Database Operating Systems<br/>18 Introduction to Database Operating Systems<br/>18.1 Introduction<br/>18.2 What Is Different?<br/>18.3 Requirements of a Database Operating System<br/>18.4 Further Reading<br/>References<br/>19 Concurrency Control: Theoretical Aspects<br/>19.1 Introduction<br/>19.2 Database Systems<br/>19.2.1 Transactions<br/>19.2.2 Conflicts<br/>19.2.3 Transaction Processing<br/>19.3 A Concurrency Control Model of Database Systems<br/>19.4 The Problem of Concurrency Control<br/>19.4.1 Inconsistent Retrieval<br/>19.4.2 Inconsistent Update<br/>19.5 Serializability Theory<br/>19.5.1 Logs<br/>19.5.2 Serial Logs<br/>19.5.3 Log Equivalence<br/>19.5.4 Serializable Logs<br/>19.5.5 The Serializability Theorem<br/>19.6 Distributed Database Systems<br/>19.6.1 Transaction Processing Model<br/>19.6.2 Serializability Condition in DDBS<br/>19.6.3 Data Replication<br/>19.6.4 Complications due to Data Replication<br/>19.6.5 Fully-Replicated Database Systems<br/>19.7 Summary<br/>19.8 Further Reading<br/>Problems<br/>References<br/>20 Concurrency Control Algorithms<br/>20.1 Introduction<br/>20.2 Basic Synchronization Primitives<br/>20.2.1 Locks<br/>20.2.2 Timestamps<br/>20.3 Lock Based Algorithms<br/>20.3.1 Static Locking<br/>20.3.2 Two-Phase Locking (2PL)<br/>20.3.3 Problems with 2PL: Price for Higher Concurrency<br/>20.3.4 2PL in DDBS<br/>20.3.5 Timestamp-Based Locking<br/>20.3.6 Non-Two-Phase Locking<br/>20.4 Timestamp Based Algorithms<br/>20.4.1 Basic Timestamp Ordering Algorithm<br/>20.4.2 Thomas Write Rule (TWR)<br/>20.4.3 Multiversion Timestamp Ordering Algorithm<br/>20.4.4 Conservative Timestamp Ordering Algorithm<br/>20.5 Optimistic Algorithms<br/>20.5.1 Kung-Robinson Algorithm<br/>20.6 Concurrency Control Algorithms: Data Replication<br/>20.6.1 Completely Centralized Algorithm<br/>20.6.2 Centralized Locking Algorithm<br/>20.6.3 
INGRES' Primary-Site Locking Algorithm<br/>20.6.4 Two-Phase Locking Algorithm<br/>20.7 Summary<br/>20.8 Further Reading<br/>Problems<br/>References |
650 #0 - SUBJECT | |
Keyword | Operating systems (Computers) |
942 ## - ADDED ENTRY ELEMENTS (KOHA) | |
Koha item type | General Books |
Withdrawn status | Lost status | Damaged status | Not for loan | Home library | Current library | Shelving location | Date acquired | Full call number | Accession number | Date last seen | Koha item type |
---|---|---|---|---|---|---|---|---|---|---|---|
| | | | Central Library, Sikkim University | Central Library, Sikkim University | General Book Section | 14/06/2016 | 005.23 | P21223 | 14/06/2016 | General Books |