Operating Systems Flashcards

Operating Systems

46 flashcards

A process is an instance of a computer program that is being executed. It is a unit of activity characterized by the execution of a sequence of instructions and the use of resources such as CPU time, memory, files, and I/O devices.
The main components of a process are: the program code, program data, stack, heap, and process control block (PCB).
A process control block is a data structure maintained by the operating system that contains information about a process, including its state, program counter, CPU registers, and memory management information.
The typical process states are: new, ready, running, waiting, terminated. A process can transition between these states based on events like process creation, scheduling decisions, waiting for I/O or resources, and process completion.
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process from the ready queue to be executed.
The main process scheduling algorithms are: First-Come, First-Served (FCFS), Shortest-Job-First (SJF), Priority Scheduling, Round Robin (RR), and Multi-level Queue Scheduling.
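To make the difference between FCFS and SJF concrete, here is a minimal sketch that computes average waiting time for both, assuming all processes arrive at time 0. The burst times are the classic textbook example, not taken from this deck.

```python
# Average waiting time under FCFS vs non-preemptive SJF, a minimal sketch.
# Assumes all processes arrive at time 0; burst times are illustrative.

def avg_waiting_time(bursts):
    """Each job waits for the total burst time of the jobs run before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                      # CPU bursts in arrival order
fcfs = avg_waiting_time(bursts)          # run in arrival order
sjf = avg_waiting_time(sorted(bursts))   # run shortest job first

print(fcfs)  # 17.0  (waits: 0, 24, 27)
print(sjf)   # 3.0   (waits: 0, 3, 6)
```

Running the short jobs first drops the average wait from 17 to 3 time units, which is why SJF is provably optimal for average waiting time when burst lengths are known.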
Memory management is the functionality of an operating system that handles or manages primary memory and moves processes back and forth between main memory and disk during execution.
The main memory management techniques are: paging, segmentation, and virtual memory.
Paging is a memory management technique in which processes are divided into fixed-size pages, with pages being loaded into any available space in main memory and their locations being tracked by page tables.
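The page-table lookup described above can be sketched in a few lines. The page size and the frame numbers in the table are illustrative assumptions, not values from the deck.

```python
# Virtual-to-physical address translation via a page table, a minimal sketch.
# The 4 KiB page size and the frame mappings are illustrative.

PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}       # page number -> frame number

def translate(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise KeyError("page fault")  # page is not resident in memory
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

The offset passes through unchanged; only the page number is remapped, which is what lets pages land in any free frame.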
Segmentation is a memory management technique in which a process is divided into a number of segments, with each segment being a logically coherent portion of the process, such as code, data, or stack.
Virtual memory is a technique that allows the execution of processes that are not completely in memory by mapping virtual memory addresses used by a program to physical memory addresses.
A file system is a method for storing and organizing computer files and the data they contain to make it easy to find and access them.
The main components of a file system are: files, directories/folders, file metadata, file access permissions, and file organization on storage devices.
A directory, also called a folder, is a container that holds files or other directories. It helps organize files into a hierarchical structure.
File metadata is data about a file, such as its name, size, creation and modification dates, and access permissions.
File access permissions define what operations (read, write, execute) are allowed for a particular file, for the file's owner, group, and all other users.
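The owner/group/other read-write-execute bits can be decoded with Python's stat module; this sketch renders a raw mode value (here the common octal notation) as an "rwx" string.

```python
# Decoding Unix-style permission bits into an rwx string, a minimal sketch.

import stat

def rwx(mode):
    """Render owner/group/other permission bits as e.g. 'rwxr-xr--'."""
    flags = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in flags)

print(rwx(0o754))  # rwxr-xr--  (owner: all; group: read+execute; other: read)
```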
Concurrency refers to the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome.
Concurrency is about dealing with multiple tasks or processes at once, while parallelism is about performing multiple tasks simultaneously.
A race condition is an undesirable situation that occurs when two or more threads or processes access a shared resource concurrently and the final result depends on the relative timing of their execution sequences, causing data corruption or inconsistencies.
A deadlock is a situation where two or more competing processes are each waiting for the other to release one or more resources that they need to proceed, causing all of them to hang indefinitely.
Semaphores are a synchronization tool used to coordinate access to shared resources by multiple processes or threads in a concurrent system. A semaphore maintains an integer count that is decremented to acquire (blocking when it reaches zero) and incremented to release; used correctly, this prevents race conditions on the guarded resource.
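A counting semaphore caps how many threads can be inside a section at once. This sketch uses Python's threading.Semaphore with an illustrative pool size of 3 and tracks the peak number of concurrent holders.

```python
# Capping concurrency with a counting semaphore, a minimal sketch.
# Pool size, worker count, and sleep duration are illustrative.

import threading
import time

MAX_CONCURRENT = 3
sem = threading.Semaphore(MAX_CONCURRENT)
guard = threading.Lock()          # protects the bookkeeping counters
in_use, peak = 0, 0

def worker():
    global in_use, peak
    with sem:                     # blocks once 3 workers hold the semaphore
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)          # hold the "resource" briefly
        with guard:
            in_use -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()

print(peak <= MAX_CONCURRENT)     # True: never more than 3 inside at once
```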
Monitors are a high-level abstraction that encapsulate shared variables and the operations that access them, enforcing mutual exclusion automatically to prevent data races and other concurrency issues.
A critical section is a part of a program that accesses shared resources and must not be executed concurrently by multiple threads or processes to prevent race conditions.
Mutual exclusion is a mechanism to ensure that only one thread or process can access a shared resource or enter a critical section at a time.
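Mutual exclusion in practice: two threads increment a shared counter, and a lock makes each read-modify-write step atomic so no update is lost. A minimal sketch with Python's threading.Lock; the iteration counts are illustrative.

```python
# Mutual exclusion with a lock around a critical section, a minimal sketch.

import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(counter)  # 200000 -- without the lock, interleaved updates could be lost
```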
A process synchronization problem arises when multiple processes or threads need access to shared resources in a specific order to ensure correct execution and prevent race conditions or deadlocks.
The classic process synchronization problems include the bounded-buffer problem, the readers-writers problem, the dining philosophers problem, and the sleeping barber problem.
Multithreading is a concurrent execution model that allows a single process to have multiple threads of execution, enabling concurrent operations within the same memory space.
The benefits of multithreading include increased responsiveness, efficient resource utilization, simplified modeling of concurrent tasks, and the ability to exploit parallelism on multi-processor systems.
A thread is a path of execution within a process, with its own program counter, registers, and stack space, allowing multiple streams of execution to run concurrently within the same process.
Processes are isolated from each other and have separate address spaces, while threads within a process share the process's resources, including memory and file handles.
Thread scheduling is the mechanism by which the operating system kernel allocates CPU time to different threads, either by time-slicing or by prioritizing threads based on their relative priorities.
Thread synchronization refers to techniques used to coordinate the execution of multiple threads to ensure data integrity and proper control flow when accessing shared resources or executing critical sections.
Spinlocks are a locking mechanism used in multi-threaded environments where a thread repeatedly checks or "spins" on a lock variable until it acquires the lock, enabling mutual exclusion for accessing a critical section.
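The spin can be sketched as a loop around a non-blocking acquire. Here a threading.Lock merely stands in for an atomic test-and-set flag; a real spinlock would use a hardware atomic instruction rather than another lock.

```python
# A spinlock sketch: busy-wait on a non-blocking acquire until it succeeds.
# threading.Lock stands in for an atomic flag; counts are illustrative.

import threading

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                        # spin: burn CPU instead of sleeping

    def release(self):
        self._flag.release()

spin = SpinLock()
total = 0

def add():
    global total
    for _ in range(10_000):
        spin.acquire()
        total += 1                      # critical section
        spin.release()

threads = [threading.Thread(target=add) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

print(total)  # 40000
```

Spinning avoids the cost of putting a thread to sleep, which pays off only when the lock is held very briefly; otherwise a blocking mutex wastes less CPU.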
A mutex (mutual exclusion object) is a synchronization primitive that enforces a locking mechanism to ensure that only one thread can access a shared resource or critical section at a time.
A condition variable is a synchronization primitive that enables threads to pause their execution (go to sleep) until a particular condition becomes true, allowing threads to signal and wait upon conditions for shared resource access.
The producer-consumer problem is a classic example of a multi-process synchronization problem, where one process produces data items and places them in a buffer, while another process consumes the items from the buffer.
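The bounded-buffer version of producer-consumer can be sketched with a condition variable: the producer waits while the buffer is full, the consumer waits while it is empty, and each notifies the other after changing the buffer. Capacity and item count are illustrative.

```python
# Bounded-buffer producer/consumer with a condition variable, a minimal sketch.

import threading
from collections import deque

CAPACITY, N_ITEMS = 2, 10
buffer = deque()
cond = threading.Condition()
consumed = []

def producer():
    for item in range(N_ITEMS):
        with cond:
            while len(buffer) >= CAPACITY:   # buffer full: wait
                cond.wait()
            buffer.append(item)
            cond.notify_all()                # wake the waiting consumer

def consumer():
    for _ in range(N_ITEMS):
        with cond:
            while not buffer:                # buffer empty: wait
                cond.wait()
            consumed.append(buffer.popleft())
            cond.notify_all()                # wake the waiting producer

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()

print(consumed)  # [0, 1, 2, ..., 9] in order
```

Note the waits are in while loops, not if statements: a woken thread must re-check its condition, since another thread may have changed the buffer first.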
Thrashing is a situation in virtual memory systems where the system spends excessive time swapping pages between memory and disk, leading to severe performance degradation due to the high cost of disk operations.
Memory fragmentation is a condition where memory is used inefficiently, leaving areas of free space that are too small to be allocated for use, which can cause allocation requests to fail even though enough total free memory exists.
Disk scheduling is the mechanism used by an operating system to schedule I/O requests arriving for a disk, with the goal of improving overall system throughput and access times.
Some common disk scheduling algorithms include First-Come First-Served (FCFS), Shortest Seek Time First (SSTF), SCAN, C-SCAN, LOOK, and C-LOOK.
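Comparing FCFS with SSTF on total head movement makes the trade-off visible. This sketch uses the classic textbook request queue with the head starting at cylinder 53; the numbers are illustrative, not from this deck.

```python
# Total disk-head movement under FCFS vs SSTF, a minimal sketch.
# Request queue and starting cylinder are the classic textbook example.

def fcfs_movement(start, requests):
    total, head = 0, start
    for r in requests:          # service requests strictly in arrival order
        total += abs(r - head)
        head = r
    return total

def sstf_movement(start, requests):
    pending, total, head = list(requests), 0, start
    while pending:              # always service the closest pending request
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_movement(53, queue))  # 640 cylinders
print(sstf_movement(53, queue))  # 236 cylinders
```

SSTF cuts head movement sharply here, but it can starve requests far from the head, which is why elevator-style algorithms like SCAN and LOOK exist.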
Demand paging is a virtual memory management technique where pages are loaded into memory from disk only when they are referenced or demanded during program execution, as opposed to loading all pages at the start.
Page replacement is a technique used in virtual memory systems to decide which pages in main memory should be removed or swapped out to disk when new pages need to be loaded due to limited available memory.
Common page replacement algorithms include First-In First-Out (FIFO), Least Recently Used (LRU), Optimal, and Least Frequently Used (LFU).
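FIFO and LRU can be compared by counting page faults on a reference string; this sketch simulates both with three frames. The reference string is illustrative.

```python
# Page faults under FIFO vs LRU replacement, a minimal sketch.
# Reference string and frame count are illustrative.

from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(order.popleft())   # evict the oldest-loaded page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0            # insertion order = recency order
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)       # evict the least recently used
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))  # 10
print(lru_faults(refs, 3))   # 9
```

LRU exploits the recency hits (the repeated 0s and 3s) that FIFO ignores, which is why it usually approximates the Optimal algorithm better.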
Swapping is a mechanism in virtual memory systems where an entire process is temporarily moved to disk when the operating system needs to free up main memory for other processes, and is later brought back into memory to continue execution.
Disk arm scheduling refers to the algorithms and techniques used to determine the order in which disk I/O requests are serviced, with the goal of minimizing the movement of the disk arm and improving overall disk performance.
A layered file system is a file system architecture that separates file system functions into modular layers, with each layer providing a specific abstraction for the layer above it, improving flexibility, scalability, and security.