OS Review
Ch6, 7 Process Synchronization
What is race condition?
-
Ch6,7 Page 4
-
Race condition:
- several processes access and manipulate the same data concurrently
- outcome depends on which order each access takes place.
-
Race condition: multiple processes access and manipulate the same data concurrently, and the outcome of execution depends on the particular order in which the accesses take place.
-
To avoid race conditions, we must ensure that only one process at a time can manipulate the shared data, which requires some form of process synchronization and coordination.
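The lost-update effect behind a race condition can be reproduced deterministically by simulating two "processes" whose read and write steps interleave. This is a toy single-threaded model (the schedule format and names are invented for illustration), not real concurrency:

```python
# Toy model of the lost-update race: each "process" increments a shared
# counter by reading it into a local copy and later writing the copy back.
def run(schedule):
    counter = 0
    local = {}
    for proc, step in schedule:
        if step == "read":
            local[proc] = counter          # read the shared value
        else:
            counter = local[proc] + 1      # write back the incremented copy
    return counter

# Serial order: both increments survive.
serial = run([("P1", "read"), ("P1", "write"), ("P2", "read"), ("P2", "write")])

# Interleaved order: P2 reads the stale value and overwrites P1's update.
racy = run([("P1", "read"), ("P2", "read"), ("P1", "write"), ("P2", "write")])

print(serial, racy)  # 2 1
```

The same two increments yield 2 or 1 depending only on the interleaving, which is exactly the definition above.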
What is critical section problem?
-
Ch6,7 Page 7
-
The critical section problem is to design a protocol that the processes can use to cooperate without encountering race conditions.
-
What are the three requirements for a solution to the critical section problem?
- Three requirements:
- Mutual Exclusion: Only one process can execute its critical section at any given time, preventing concurrent access and race conditions.
- Progress: If no process is currently executing its critical section and there are processes requesting to enter their critical sections, then at least one of those waiting processes should eventually be allowed to enter its critical section.
- Bounded Waiting: There should be a limit on the number of times that other processes can enter their critical sections after a process has requested entry. This ensures that every process eventually gets a chance to enter its critical section and avoids starvation.
- The three requirements for the critical section problem:
- Mutual exclusion: when one process is executing in its critical section, no other process may execute in its critical section.
- Progress: if no process is executing in its critical section and some processes wish to enter theirs, then only processes not executing in their remainder sections may take part in deciding which will enter next, and this selection cannot be postponed indefinitely.
- Bounded waiting: there is a bound on the number of times other processes are allowed to enter their critical sections after a process has requested entry and before that request is granted.
What methods can be used to solve the critical section problems?
-
[Source] Some of the software-based solutions for the critical section problem are Peterson’s solution, semaphores, and monitors. Some of the hardware-based solutions involve atomic instructions such as TestAndSet, compare-and-swap, and Lock/Unlock.
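Peterson’s solution mentioned above can be sketched directly with two Python threads. Note this works only because CPython’s GIL makes each bytecode effectively atomic, giving the sequential consistency the algorithm assumes; on real hardware you would need memory barriers. N and the counter are illustrative:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to defer to
counter = 0             # shared data protected by the critical section
N = 500

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        # entry section (Peterson's algorithm)
        flag[i] = True
        turn = other                           # politely yield priority first
        while flag[other] and turn == other:
            pass                               # busy-wait
        # critical section
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1000: no increments lost
```

Setting `flag[i]` before `turn` is what gives mutual exclusion; swapping those two lines breaks the algorithm.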
-
[GPT-4] Methods to solve the critical section problems: Several methods can be used to solve the critical section problem, including:
- Lock Variables: Using lock variables, a shared variable that indicates if a resource is available or locked. This method can lead to busy waiting and is not recommended in most cases.
- Semaphores: Semaphores are synchronization primitives used to manage access to shared resources. They can be thought of as counters that can be incremented or decremented using atomic operations (wait and signal). Semaphores can be used to solve the critical section problem by enforcing mutual exclusion, ensuring progress, and enforcing bounded waiting.
- Mutex locks: Mutex (short for “mutual exclusion”) locks are another synchronization primitive that can be used to enforce mutual exclusion and protect critical sections. Mutex locks are like binary semaphores, but they can only be unlocked by the process that locked them, promoting proper locking and unlocking discipline.
-
[GPT-3.5] Various methods can be used to solve the critical section problem, such as:
-
Locks and Mutexes: These are synchronization primitives that allow only one thread or process to acquire the lock at a time. A thread/process acquiring the lock gains exclusive access to the critical section.
-
Semaphores: Semaphores can be used to control access to shared resources. They can have a count or value associated with them, and processes/threads can request or release the semaphore to enter or exit the critical section.
-
Monitors: Monitors are high-level synchronization constructs that encapsulate shared data and provide methods or procedures for accessing and modifying the data. Monitors ensure mutual exclusion by allowing only one thread/process to execute a monitor procedure at a time.
-
Atomic Operations: Some programming languages provide atomic operations that can be used to perform certain operations atomically, without interference from other threads or processes.
Note: examples here include test_and_set and compare_and_swap; neither satisfies bounded waiting, and both rely on busy-waiting (a while statement in the entry section), which is not good.
-
What are the problems if we incorrectly use the semaphore?
- Ch6,7 pp. 62:
- Deadlock and starvation are possible.
- Can cause starvation and priority inversion
- Starvation – indefinite blocking
- A process may never be removed from the semaphore queue in which it is suspended
- Priority Inversion – scheduling problem when a lower-priority process holds a lock needed by a higher-priority process
- Incorrect use of semaphore operations:
- signal (mutex) …. wait (mutex)
- wait (mutex) … wait (mutex)
- Omitting of wait (mutex) or signal (mutex) (or both)
- It is the responsibility of the application programmer to use semaphores correctly.
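The wait(mutex) … wait(mutex) mistake can be demonstrated with Python’s `threading.Semaphore`: the second acquire would block forever, which we cut short here with a timeout (the 0.1 s value is arbitrary):

```python
import threading

mutex = threading.Semaphore(1)    # binary semaphore used as a mutex

ok1 = mutex.acquire(timeout=0.1)  # wait(mutex): succeeds, count goes 1 -> 0
ok2 = mutex.acquire(timeout=0.1)  # wait(mutex) again: self-deadlock, times out
print(ok1, ok2)                   # True False

mutex.release()                   # signal(mutex) restores the count
```

Without the timeout, the second `acquire` would suspend the process on the semaphore queue indefinitely, which is precisely the deadlock pattern listed above.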
- [GPT-4] Problems if we incorrectly use the semaphore: Incorrect use of semaphores can lead to several issues, including:
- Deadlock: If two or more processes are waiting for each other’s signals to release resources, a deadlock can occur.
- Starvation: If some processes are continuously denied access to shared resources, they may never get a chance to execute their critical sections, resulting in starvation.
- Priority Inversion: When a lower-priority process holds a resource needed by a higher-priority process, leading to a temporary inversion of priorities.
- Solution: Use priority-inheritance protocol
What can semaphore help in process synchronization?
- [GPT-4] Semaphores can be used to enforce mutual exclusion, ensure progress, and enforce bounded waiting, thereby solving the critical section problem. They can also be used to synchronize the execution of processes and manage access to shared resources in concurrent systems, preventing race conditions and promoting correct program execution.
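A classic illustration of this is the bounded-buffer (producer–consumer) problem: counting semaphores track free and filled slots, while a binary semaphore protects the buffer itself. A sketch using Python’s `threading.Semaphore` (buffer capacity and item counts are arbitrary):

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Semaphore(1)         # protects the buffer itself
consumed = []

def producer(items):
    for item in items:
        empty.acquire()        # wait(empty): block if the buffer is full
        mutex.acquire()        # wait(mutex)
        buffer.append(item)
        mutex.release()        # signal(mutex)
        full.release()         # signal(full)

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait(full): block if the buffer is empty
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, ..., 9], with never more than 3 items buffered
```

Note the acquire order (empty before mutex): swapping it can deadlock, which ties back to the "incorrect use" question above.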
Monitor is not included in the exam.
Ch9 Main Memory
What is memory management?
- bind a logical address space to a separate physical address space (CORE of Ch9)
When does the address binding happen?
- Ch9. pp.9: address binding determined by types of systems
- Compile time: Absolute address is used in code
- Example: embedded systems with no kernel
- Disadvantage: code must be re-compiled if starting location in memory changes
- Load time: Relocatable code is generated if memory location is not known at compile time
- Program’s code has a relocation table of relative addresses of all the things in the code
- The real memory address is calculated according to the given location.
- Example: very old multiprogramming systems
- Disadvantage: the address in memory is fixed after the program is loaded into memory.
- Execution time (run time): Binding is delayed until run time.
- Example: most computers today
- A process can be moved during its execution from one memory segment to another
- Need hardware support for address maps
- [GPT-4] Address binding can happen at various points during the program’s lifecycle, such as:
- Compile time: If memory locations are known beforehand, the absolute code can be generated.
- Load time: If not known at compile time, the compiler generates relocatable code, and the address is determined during loading.
- Execution time: If the process can be moved while executing, binding must be delayed until run time.
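The hardware support for execution-time binding is the MMU’s relocation (base) and limit registers: every logical address is checked against the limit and then offset by the base. A minimal sketch (the base/limit values 14000 and 3000 are the standard textbook illustration, assumed here, not specific to these slides):

```python
BASE = 14000    # relocation (base) register
LIMIT = 3000    # limit register

def mmu(logical):
    # Hardware check: trap to the OS on an out-of-range address.
    if not 0 <= logical < LIMIT:
        raise MemoryError("trap: addressing error")
    return BASE + logical   # dynamic relocation at every memory reference

print(mmu(346))  # 14346
```

Because every reference is relocated on the fly, the OS can move the process and simply update BASE, which is why execution-time binding allows processes to migrate between memory segments.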
What is physical memory address?
- Ch9 pp.11:
- all the addresses used by the memory hardware
- Addressed in bytes
- Size of space depends on real size of physical memory
- [GPT-4] A physical memory address refers to the actual location in the memory hardware where the data is stored
What is logical memory address?
- Ch9 pp. 11: Logical address
- generated by the Program/CPU
- a.k.a. virtual address
- all the addresses used by the CPU
- Addressed in bytes
- Size of space depends on the CPU bits
- [GPT-4] A logical memory address (also known as a virtual memory address) is generated by the program and is used as a reference to access a memory location. It is then translated to the actual physical memory address by the memory management unit (MMU).
In which situation, the above two addresses are same? In which situation, the above two addresses are different?
- Ch9 pp.12:
- SAME: compile-time address binding, and load-time address-binding schemes
- DIFFERENT: execution-time address-binding scheme
- [GPT-4] (makes perfect sense, I think) In some situations, such as the execution of a simple, single-program operating system, the logical and physical memory addresses may be the same. However, in a multi-programmed or virtual memory environment, the logical and physical memory addresses are typically different, as the MMU handles translation to allow for memory abstraction and protection.
- Combination: see the Example part of the second question above. Since modern computers all use execution-time address binding, the two addresses (logical addr and physical addr) differ under that binding scheme, so “a multi-programmed or virtual memory environment” also works as a more concrete answer to this question.
What memory allocation methods are there? Advantages? Disadvantages?
- Memory allocation methods include contiguous allocation, paging, and segmentation.
- Advantages: Contiguous allocation is simple; paging and segmentation offer more flexibility and efficient memory use.
- Disadvantages: Contiguous allocation is prone to fragmentation and inefficient use; paging and segmentation can be more complex and require additional hardware support.
How to map the logical address to physical address?
- [GPT-4] To map a logical address to a physical address, the system uses a combination of hardware (MMU) and software (operating system) to perform the translation. In paging, for example, the logical address is divided into a page number and an offset within the page, which are used to locate the corresponding physical address.
- Ch9 pp. 26: How does a logical address map to a physical address?
- Paging divides logical memory into blocks of the same size, called pages
- Page size = frame size
- Ans: A page table is needed to translate logical to physical addresses
- Ch9 pp. 27: Address Translation Scheme: Page Table
- Given a logical address space $2^m$ and page (frame) size $2^n$ , there are $\frac{2^m}{2^n} = 2^{m-n}$ pages in a process’s logical address space
- Page number (p) – used as an index into a page table which contains base address (frame number) of each page in physical memory
- Page offset (d) – combined with base address (frame number) to define the physical memory address that is sent to the memory unit
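This translation can be sketched directly: split the logical address into (p, d), look up p in the page table, and recombine with the frame number. The page size and page-table contents below are made-up values for illustration:

```python
PAGE_SIZE = 4096                  # 2^12 bytes, so the offset d uses the low 12 bits

page_table = {0: 5, 1: 2, 2: 7}   # page number p -> frame number (toy contents)

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)   # page number, page offset
    frame = page_table[p]               # a missing entry here would be a page fault
    return frame * PAGE_SIZE + d

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

The `divmod` by a power-of-two page size is the software equivalent of the hardware simply splitting the address bits into p (high m−n bits) and d (low n bits).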
How to reduce time to find physical address?
- Ch9 pp.35: Solution: TLB(Translation Look-aside Buffer)
- also called associative memory
- hardware cache inside the MMU.
- TLB (translation look-aside buffer); note this is hardware support
- Effective memory-access time (EMAT):
- $EMAT=(T_{m}+\epsilon)\alpha+(2T_{m}+\epsilon)(1-\alpha)$
- $T_{m}$: time of a single memory access
- $\epsilon$: time of a single TLB access
- $\alpha$: TLB hit ratio; a TLB hit needs one memory access, a miss needs two.
- Ch9 pp.39: Effective memory access time
- $EMAT = α(c+m)+(1-α)(c+2m)$
- Hit ratio $α$ is percentage of times that a page number is found in the TLB
- $c$: TLB access time, $m$: memory access time
- An example
- α: 80%, c: 10 nanoseconds, m: 90 nanoseconds
- EAT = 0.80 × (10+90) + 0.20 × (10+90×2) = 118 ns
- 18% slowdown in access time
- If α: 99%
- EAT = 0.99 × (10+90) + 0.01 × (10+90×2) = 100.9 ns
- only 0.9% slowdown in access time
- [GPT-4] To reduce the time to find the physical address, you can use techniques such as caching the most frequently used translations in a translation look-aside buffer (TLB). This reduces the need to access the page table or segment table, speeding up the translation process.
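The EMAT formula from pp. 39 reproduces both slide examples directly:

```python
def emat(alpha, c, m):
    # Hit: one TLB access + one memory access.
    # Miss: one TLB access + two memory accesses (page table, then data).
    return alpha * (c + m) + (1 - alpha) * (c + 2 * m)

print(emat(0.80, 10, 90))  # 118.0 ns  (18% slowdown)
print(emat(0.99, 10, 90))  # ~100.9 ns (0.9% slowdown)
```

The sensitivity to `alpha` is the whole point of the TLB: going from an 80% to a 99% hit ratio nearly eliminates the translation overhead.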
What methods are used to solve the big page table problem?
-
Ch9 pp.41: Memory structures for paging can get huge, Solutions:
-
Hierarchical Paging
- A very large page table can be broken into smaller pieces
-
Hashed Page Tables
- Variant: clustered page tables, suited to 64-bit address spaces and especially useful for sparse address spaces
-
Inverted Page Tables
- This approach reduces the memory needed to store the page tables, but it increases the time needed to search the table when a page is referenced (trading time for space)
-
-
[GPT-4] To solve the big page table problem, several methods can be used:
- Multi-level paging: Breaks the page table into smaller, more manageable pieces, reducing the size and overhead.
- Inverted page table: Uses a single, global page table that contains an entry for each physical frame rather than each virtual page, thus reducing the table size.
- Translation look-aside buffer (TLB): Caches frequently used translations, reducing the need to access the page table and improving performance.
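Hierarchical (multi-level) paging splits the page number itself. On a 32-bit address with 4 KiB pages, a common layout (assumed here for illustration) is 10 bits of outer index, 10 bits of inner index, and 12 bits of offset:

```python
def split_two_level(addr):
    offset = addr & 0xFFF           # low 12 bits: offset within the page
    inner = (addr >> 12) & 0x3FF    # next 10 bits: index into an inner page table
    outer = (addr >> 22) & 0x3FF    # top 10 bits: index into the outer page table
    return outer, inner, offset

print(split_two_level(0x00403004))  # (1, 3, 4)
```

Only the outer table plus the inner tables actually in use need to be resident, which is how hierarchical paging shrinks the memory cost of a sparse 2^20-entry flat table.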
Example: The Intel 32 and 64-bit Architectures is not included in exam
Ch10 Virtual Memory
What is swapping?
- Ch10 pp. 5: A complete process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
- Advantage: Increase multiprogramming
- Disadvantage: Context switch time can then be very high
- [GPT-4] Swapping is the process of moving a process, or parts of it, between the main memory and a secondary storage device (like a hard drive) for efficient memory management. It is performed when memory space is inadequate for all processes, and operating systems use it to make better use of available memory resources.
What is backing store?
- Ch10 pp. 4: The backing store is a fast disk large enough to accommodate copies of all memory images for all users, with direct access to these memory images
- [GPT-4] A backing store is a secondary storage device, such as a hard disk or solid-state drive, used to store copies of the contents in primary memory (RAM). When a process needs more memory than what is currently available, the operating system may use the backing store to temporarily store some sections of the process, freeing up main memory for other active processes.
What is demand paging?
- Ch10 pp. 10: Demand paging: bring a page into memory only when it is needed
- Hardware support: MMU
- [GPT-4] Demand paging is a memory-management technique where a page of memory is only loaded into the main memory when it is demanded, i.e., when a process needs to access it. This mechanism allows the execution of processes without requiring all of their pages in memory at once, leading to more efficient use of memory resources.
What is page fault?
- Ch10 pp. 14: A page fault is caused when a process references a page that is not in RAM; the MMU detects that the page is marked invalid in the process’s page table.
- [GPT-4] A page fault occurs when a process tries to access a portion of its address space that is not currently mapped into physical memory. This usually triggers the operating system to load the requested page from secondary storage into the main memory so that the process can continue execution.
What is effective access time?
-
Reference: Ch9 pp.39: Effective memory access time
-
Ch10 pp. 17: $EAT = (1 - p) \times \text{memory\_access\_time} + p \times \text{page\_fault\_time}$
- Page fault rate $p$: 0~1
- To achieve a small $p$ => locality of reference
- Example: memory_access_time = 200 ns, page_fault_time = 8 ms
- EAT = (1 – p) × 200 ns + p × 8 ms = (1 – p) × 200 + p × 8,000,000 = 200 + p × 7,999,800 (in ns)
- e.g., p = 0.001, EAT ≈ 200 + 0.001 × 7,999,800 ≈ 8,200 ns ≈ 8.2 μs
-
[GPT-4] Effective Access Time (EAT) refers to the average time needed to access a piece of data in the system, considering the time spent in accessing the memory and the time incurred by possible page faults. EAT comprises memory access time, page fault rate, and the time needed to service a page fault.
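The pp. 17 formula is easy to check numerically; note the result at p = 0.001 is about 8.2 microseconds, roughly a 40× slowdown over the raw 200 ns memory access:

```python
def eat_ns(p, mem_ns=200, fault_ns=8_000_000):   # 8 ms = 8,000,000 ns
    # Weighted average of a normal access and a faulting access.
    return (1 - p) * mem_ns + p * fault_ns

print(eat_ns(0.001))  # 8199.8 ns, i.e. about 8.2 microseconds
print(eat_ns(0.0))    # 200.0 ns: no faults, pure memory speed
```

Even a 1-in-1000 fault rate dominates the average, which is why a small p (via locality of reference) is essential for demand paging to perform well.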
What is locality of reference?
- Locality of reference: the tendency of processes to reference memory in patterns rather than randomly; this is what makes demand paging perform reasonably.
How to manage the free frames?
- Note: this part was dropped when the course slides were adapted from CMU’s, so the answer below is taken from the CMU slides.
- CMU 2.01 Most operating systems maintain a free-frame list – a pool of free frames for satisfying such requests.
How to determine the pages to be replaced?
- [GPT-4] Page replacement algorithms are used to determine which pages should be replaced when memory becomes full. Some common algorithms include the Least Recently Used (LRU), First-In-First-Out (FIFO), and the Optimal Page Replacement Algorithm.
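FIFO and LRU can be compared on the standard textbook reference string (7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1) with 3 frames, where FIFO incurs 15 faults and LRU 12:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    mem, faults = [], 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the oldest-loaded page
            mem.append(page)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used
            mem[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 15 12
```

LRU approximates the optimal algorithm by using past references to predict future ones, which is why it usually beats FIFO on the same string.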
How can the thrashing happen?
- [GPT-4] Thrashing occurs when a system spends more time swapping pages in and out of memory rather than executing processes. This happens when there are too many processes competing for limited memory resources, causing the system to constantly swap pages, leading to a dramatic decrease in overall performance.
- Ch10 pp. 49: If a process does not have “enough” frames in memory, its page-fault rate is very high, which leads to frequent page replacement and finally thrashing.
- A process is busy swapping pages in and out, instead of doing useful work.
How to allocate frames?
- [GPT-4] Frame allocation can be done using various policies, such as equal allocation, proportional allocation, or priority-based allocation. These approaches assign a certain number of frames to each process based on factors like fairness, memory requirements, or process priority.
- Ch10 pp. 47: Two major allocation schemes
- Fixed allocation
- Equal allocation
- Proportional allocation
- Priority allocation
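Proportional allocation gives process i roughly $a_i = (s_i / S) \times m$ frames, where $s_i$ is its size and $m$ the total frame count. With the common textbook numbers (m = 62 frames, process sizes 10 and 127 pages; assumed values, not from these slides):

```python
def proportional(m, sizes):
    # a_i = floor(s_i / S * m); any leftover frames stay in the free pool.
    S = sum(sizes)
    return [s * m // S for s in sizes]

print(proportional(62, [10, 127]))  # [4, 57]
```

Compare with equal allocation, which would give each process 31 frames regardless of size; proportional allocation instead matches frames to memory demand.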
How to allocate kernel memory?
- [GPT-4] Kernel memory allocation usually involves dividing the memory into fixed-sized or variable-sized blocks, using data structures such as buddy systems, slab allocators, or specialized allocators for certain kernel structures. The kernel memory manager handles the allocation and deallocation of these blocks as needed by the kernel and applications.
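One concrete detail of the buddy system is easy to show: requests are satisfied from power-of-two block sizes, obtained by splitting a larger block into equal “buddies.” A minimal sketch of just the size-selection step (the 256 KB maximum block is an assumed example):

```python
def buddy_block_size(request, max_block=256 * 1024):
    # Split the block in half as long as a half still satisfies the request.
    size = max_block
    while size // 2 >= request:
        size //= 2
    return size

print(buddy_block_size(21 * 1024))  # 32768: a 21 KB request gets a 32 KB block
```

The unused 11 KB in that block is internal fragmentation, the main drawback of the buddy system; its advantage is that freed buddies can be coalesced back into larger blocks quickly.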
Ch13 File System Interface
What is file?
- Ch13 pp. 4: A file is
- a named collection of related information that is recorded on secondary storage (e.g. Disk, SSD, Flash, etc.)
- a sequence of bits, bytes, lines, or records, the meaning of which is defined by the file’s creator
- the smallest logical storage unit from the user’s point of view
- [GPT-4] A file is a collection of data that is stored on a computer or other storage device. It typically represents a single unit of information, such as a document, image, audio, or video file, and is identified by a unique filename.
What is file system?
-
Ch13 pp. 4: File system resides on secondary storage (disks)
- Provides user interface to storage, mapping logical to physical
- Provides efficient and convenient access to disk by allowing data to be stored, located, retrieved easily
-
[GPT-4] A file system is a method for storing, organizing, and managing files and directories on a storage device, such as a hard drive, solid-state drive, or external storage media. The file system controls how data is stored, accessed, and retrieved, providing a structure and set of rules for managing data on the device.
What basic operations can be performed on a file?
- Ch13 pp. 7: Six basic file operations:
- Create a file
- Write a file
- Read a file
- Reposition within a file
- Truncate a file
- Delete a file
- Except for file creation and deletion, all other operations require opening the file first and closing it after the operation
What basic operations can be performed in a directory?
- Ch13 pp. 16 Basic operations
- Search for a file
- Create a file
- Delete a file
- List a directory
- Rename a file
- Traverse the file system
What structures can be used to organize directories?
- Ch13 pp. 17 Directory organization
- Single level directory
- Two-level directory
- Tree-structured directories
- Acyclic graph directories
How can a file system be mounted?
- [GPT-4] A file system can be mounted by attaching it to an existing directory structure, making its content accessible within the larger system. This involves specifying the device to be mounted, the mount point (an empty directory where the file system will be attached), and potentially additional options such as file system type or mount options.
- Ch13 pp. 21: A file system must be mounted before it can be accessed. An unmounted file system can be mounted at a mount point; the mount point must be an empty directory, i.e., any original file system at the mount point must be removed first if one exists
How can the access to a file be controlled?
-
[GPT-4] Access to a file can be controlled by assigning permissions and ownership to the file. File permissions typically include read, write, and execute permissions for different categories of users (owner, group, and others). Users can be granted or denied access to a file based on these permissions and their relationship to the file’s owner and group. Access control lists (ACLs) can also be used to provide finer control over access permissions for specific users or groups.
-
Ch13 pp. 25: Each file and directory has an access-control list (ACL)
File locking is not included in the exam.
Ch14 File System Implementation
Why a file system is organized in layers?
- [GPT-4] makes the design easier to understand and maintain, allows improvements and modifications to be made in one layer without affecting the other layers, creating a consistent and standard interface between the layers, making the system more modular and scalable
- Ch14 pp.5: Each layer in the design uses the features of lower layers to create new features.
What are two categories of structures used in file system implementation?
- Ch14 pp.7: Two categories of structures
- on-disk (on-storage) structures
- in-memory structures
What is File Control Block or inode?
-
[GPT-4] A File Control Block (FCB) or inode is a data structure used to store information about a file, such as its size, creation time, modification time, owner, permissions, and location of the file’s data blocks on the disk. The inode serves as a unique identifier for the file within the file system, allowing the file system to manage the file’s metadata and access its content efficiently.
-
Ch14 pp.8: At On-disk (On-storage) Structures => Directory structure
- UFS (Unix): includes file names and associated inode (same as File Control Blocks) numbers
- NTFS (Windows): File Control Blocks (FCBs) are stored in the master file table
What open file tables are there in an operating system? What are they used for?
- Ch14 pp. 9: At In-Memory Structures
- system-wide open-file table
- per-process open-file table
- [GPT-4] There are two types of open file tables in an operating system:
- System-wide open file table: This table contains information about all the open files in the system, such as the reference count, file permissions, and file offset.
- Per-process open file table: This table contains information specific to each process, such as the file descriptor and any process-specific flags or settings.
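The relationship between the two tables can be modeled as nested dictionaries. This is a toy sketch; the field names and the stdio fd convention are illustrative assumptions, not a real OS API:

```python
system_table = {}   # inode -> {"open_count": n}   (system-wide open-file table)

class Process:
    def __init__(self):
        self.fd_table = {}   # fd -> {"inode": ..., "offset": ...} (per-process table)
        self._next_fd = 3    # 0, 1, 2 conventionally reserved for stdio

    def open(self, inode):
        # One system-wide entry per open file, shared by all processes...
        entry = system_table.setdefault(inode, {"open_count": 0})
        entry["open_count"] += 1
        # ...but each process gets its own fd and its own file offset.
        fd, self._next_fd = self._next_fd, self._next_fd + 1
        self.fd_table[fd] = {"inode": inode, "offset": 0}
        return fd

a, b = Process(), Process()
fd1, fd2 = a.open("inode#42"), b.open("inode#42")
print(system_table["inode#42"]["open_count"])  # 2
```

Two processes opening the same file share one system-wide entry (hence the reference count) but keep independent offsets in their per-process tables.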
What methods are used to allocate disk blocks to files?
- Ch14 pp. 15: How disk blocks are allocated for files
- Contiguous allocation(not common now)
- Linked allocation (e.g. FAT32 in Windows)
- Indexed allocation (e.g. ext3 in Linux)
- [GPT-4] The methods used to allocate disk blocks to files are:
- Contiguous allocation: All the blocks for a file are allocated together in a continuous sequence of disk blocks. This method enables fast access but may cause fragmentation.
- Linked allocation: Each file is a linked list of disk blocks, with each block containing a pointer to the next block. This method reduces fragmentation but can result in slower access times.
- Indexed allocation: An index block is used to store pointers to the file’s data blocks. This method allows for faster access but may consume more space for the index blocks.
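Linked allocation with a file-allocation table (FAT) can be walked like a linked list: the chain pointers live in the table rather than in the data blocks. The block numbers below follow the textbook’s illustrative FAT figure, with -1 marking end-of-file:

```python
fat = {217: 618, 618: 339, 339: -1}   # block -> next block (toy FAT contents)

def file_blocks(start):
    # Follow the chain stored in the FAT from the file's starting block.
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(217))  # [217, 618, 339]
```

Keeping the pointers in one table (cacheable in memory) is what lets FAT support direct access reasonably well despite being a linked scheme.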
What are main methods used to manage free space on disk?
-
Ch14 pp. 31: Approaches
- Bit vector (map)
- Linked list
- Grouping
- Counting
-
[GPT-4] The main methods used to manage free space on a disk are:
-
Bitmaps or Free Space List: A data structure is used to represent the free and allocated blocks on the disk. Each bit in the bitmap or each entry in the free space list corresponds to a specific block on the disk, allowing for efficient tracking and allocation of free blocks.
-
Linked list: Free blocks are stored in a linked list, where each block contains a pointer to the next free block. This method uses minimal additional space but can be slower for allocation and deallocation.
-
Grouping: The free space is divided into groups, and a free list is maintained for each group. This method can improve allocation efficiency and reduce fragmentation.
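With a bit vector, finding a free block reduces to a bit scan. Treating the map as a Python integer where bit i = 1 means block i is free (a convention assumed here; some systems use the opposite):

```python
def first_free(bitmap):
    if bitmap == 0:
        return -1                                 # no free block
    return (bitmap & -bitmap).bit_length() - 1    # index of the lowest set bit

free_map = 0b10110000          # blocks 4, 5, and 7 are free
print(first_free(free_map))    # 4

free_map &= ~(1 << 4)          # allocate block 4 by clearing its bit
print(first_free(free_map))    # 5
```

The `bitmap & -bitmap` trick isolates the lowest set bit in one operation, mirroring the hardware bit-scan instructions that make bit vectors fast in practice.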
-
Virtual File System, Schemes in handling large files, Performance are not included in the exam.