OS Tech Interview Preparation Notes

What is an Operating System?

An operating system is software that acts as an interface between the user and the hardware. It manages the system's resources and provides an environment in which users can run their programs conveniently and efficiently, hiding the underlying complexity of the hardware.

Services of Operating System

The functions of an operating system can be grouped into several categories:

  1. Process Management:

    • Process scheduling: Allocating CPU time to processes and determining the order of execution.

    • Process creation and termination: Creating new processes and terminating existing ones.

    • Process synchronization: Managing synchronization and communication between processes.

    • Process communication: Facilitating inter-process communication and data exchange.

  2. Memory Management:

    • Memory allocation: Assigning memory to processes and managing memory allocation.

    • Memory protection: Protecting memory regions to prevent unauthorized access.

    • Virtual memory management: Utilizing secondary storage as an extension of primary memory, allowing for larger address spaces.

  3. File System Management:

    • File creation, deletion, and manipulation: Creating, deleting, and managing files.

    • Directory and file organization: Organizing files and directories in a hierarchical structure.

    • File access control: Enforcing file access permissions and security.

    • File system optimization: Optimizing file access and storage to enhance performance.

  4. Device Management:

    • Device drivers: Providing drivers to communicate with hardware devices.

    • Device allocation: Allocating devices to processes and managing device access.

    • Input/output control: Handling input/output operations between devices and processes.

    • Device synchronization: Managing concurrent access to devices to prevent conflicts.

  5. User Interface:

    • Command-line interface (CLI): Providing a text-based interface for users to interact with the system through commands.

    • Graphical user interface (GUI): Offering a visual interface with icons, windows, and menus for user interaction.

    • System Utilities: Providing tools and utilities for managing system configuration, settings, and applications.

  6. Network Management:

    • Network protocols: Supporting network communication protocols for data transfer.

    • Network configuration: Configuring network settings, IP addresses, and connectivity.

    • Network resource sharing: Enabling sharing of files, printers, and other network resources.

    • Network security: Implementing security measures to protect networked systems and data.

These functions collectively enable an operating system to manage hardware resources, provide a user-friendly interface, facilitate application execution, and ensure system stability and security.

Types of Operating Systems

  • Single Processing OS

  • Batch Processing OS

  • Multiprogramming OS

  • Multitasking OS

  • Multiprocessing OS

  • Distributed OS

  • Real-Time OS

RAM vs ROM

RAM (Random Access Memory) and ROM (Read-Only Memory) are both types of computer memory, but they differ in their characteristics and usage. Here's a comparison between RAM and ROM:

RAM:

  1. Volatile: RAM is volatile memory, meaning its contents are lost when power is turned off or interrupted.

  2. Read and write: RAM allows both reading and writing of data, making it suitable for storing and accessing data during program execution.

  3. Temporary storage: RAM serves as temporary storage for data and program instructions that are actively used by the CPU.

  4. Faster access: RAM provides faster access to data compared to other types of storage devices.

  5. Capacity: RAM capacity can range from a few gigabytes (GB) to several terabytes (TB) in modern computers.

  6. Types: Common types of RAM include dynamic random access memory (DRAM) and static random access memory (SRAM).

ROM:

  1. Non-volatile: ROM is non-volatile memory, meaning its contents are retained even when power is turned off.

  2. Read-only: ROM is primarily used for storing permanent or semi-permanent data that cannot be modified or overwritten.

  3. Bootstrapping: ROM is commonly used to store the computer's firmware or BIOS, which contains the initial instructions to start the computer when powered on.

  4. Firmware updates: Some types of ROM, such as EEPROM (Electrically Erasable Programmable Read-Only Memory), allow limited modification of data through specific operations or firmware updates.

  5. Slower access: ROM generally provides slower access compared to RAM due to its nature of storing non-modifiable data.

  6. Types: Different types of ROM include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), and Electrically Erasable Programmable ROM (EEPROM).

Quick definitions of common memory types:

  • RAM (Random Access Memory): Volatile computer memory that allows for reading and writing of data during system operation, providing temporary storage for active programs and data.

  • ROM (Read-Only Memory): Non-volatile computer memory that contains permanent or semi-permanent data and instructions, typically used for storing firmware, system instructions, and other data that cannot normally be modified.

  • SRAM (Static Random Access Memory): Fast, volatile memory that provides quick access to data but is more expensive and less dense compared to DRAM.

  • DRAM (Dynamic Random Access Memory): Slower, volatile memory that offers higher density and lower cost compared to SRAM but requires constant refreshing to retain data.

  • PROM (Programmable Read-Only Memory): Non-volatile memory that can be programmed once and cannot be modified afterwards.

  • EEPROM (Electrically Erasable Programmable Read-Only Memory): Non-volatile memory that can be electrically programmed and erased for occasional modifications.

  • EPROM (Erasable Programmable Read-Only Memory): Non-volatile memory that can be programmed using specialized equipment and erased by exposure to UV light.

Virtualization vs Containerization

Virtualization:

  • Virtualization is the process of creating multiple virtual machines (VMs) or virtualized environments on a single physical server.

  • Each virtual machine operates as an independent entity with its own operating system (OS) and resources, such as CPU, memory, and storage.

  • Virtualization allows for running multiple operating systems and applications simultaneously on the same hardware, enabling better hardware utilization and isolation between VMs.

  • It provides flexibility and enables the migration of virtual machines across different physical servers.

Containerization:

  • Containerization is a lightweight form of virtualization that allows applications to run in isolated environments called containers.

  • Containers share the host operating system kernel and libraries, making them more lightweight and faster to start compared to virtual machines.

  • Containers package applications and their dependencies into a portable and consistent format, enabling easy deployment and scalability.

  • Containerization provides efficient resource utilization and allows for running multiple isolated applications on a single host.

  • It is commonly used in microservices architectures and container orchestration platforms like Docker and Kubernetes.

MBR vs GPT

  1. MBR (Master Boot Record): MBR is an older and widely supported partitioning scheme. It uses a 32-bit structure and is typically associated with BIOS-based systems. MBR allows for up to four primary partitions or three primary partitions and one extended partition containing multiple logical partitions.

  2. GPT (GUID Partition Table): GPT is a newer and more advanced partitioning scheme. It uses a 64-bit structure and is associated with UEFI-based systems. GPT supports a much larger number of partitions, with up to 128 primary partitions, and offers support for larger disk sizes beyond 2 terabytes (TB).

| Basis of Comparison | MBR (Master Boot Record) | GPT (GUID Partition Table) |
| --- | --- | --- |
| Age | Older | Newer |
| Structure | 32-bit | 64-bit |
| Number of partitions | Up to 4 primary partitions, or 3 primary partitions and 1 extended partition | Up to 128 primary partitions |
| Maximum disk size | Limited to 2 terabytes (TB) | Up to 9.4 zettabytes (ZB) |
| Compatibility | Compatible with BIOS-based and UEFI-based systems | Designed for UEFI-based systems; can be used with BIOS systems via a compatibility layer |

Compiler: A software that translates high-level programming language code into machine code.

Loader: A program that loads executable files into memory for execution.

Assembler: A program that converts assembly language code into machine code.

Interpreter: A program that directly executes instructions of a high-level programming language without prior translation.

System calls: Functions provided by the operating system that allow user programs to interact with the underlying hardware and resources.
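For a concrete illustration, here is a minimal sketch assuming a POSIX system such as Linux: the library wrappers write() and getpid() correspond almost directly to kernel system calls.

```c
/* Minimal POSIX sketch: write() and getpid() are thin wrappers over system calls. */
#include <unistd.h>   /* write(), getpid(), STDOUT_FILENO */
#include <stdio.h>    /* snprintf() */

int main(void) {
    char buf[64];
    int len = snprintf(buf, sizeof(buf), "My PID is %d\n", (int)getpid());
    write(STDOUT_FILENO, buf, (size_t)len);  /* ask the kernel to write to stdout */
    return 0;
}
```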

API: A set of rules and protocols that define how software components should interact with each other.

Kernel: The core component of an operating system that manages system resources and provides essential services.

Shell: A command-line interface that allows users to interact with the operating system by executing commands.

JVM (Java Virtual Machine): A runtime environment that executes Java bytecode.

Booting: The process of starting up a computer system, including loading the operating system into memory and initializing the hardware.

Monolithic vs Micro Kernel

Monolithic Architecture: In a monolithic architecture,

  1. The entire operating system is implemented as a single, large software module

  2. Here all operating system services, such as process management, file system, device drivers, and networking, reside within the kernel.

  3. This design approach allows for efficient communication between components but can lead to a lack of modularity and difficulties in maintenance and scalability.

Microkernel Architecture: In a microkernel architecture,

  1. The operating system is divided into small, independent modules,

  2. With the core functionality (e.g., process scheduling, memory management) implemented in the microkernel.

  3. Other services, such as file systems and device drivers, are implemented as separate user-level processes or servers, running outside the microkernel's protection boundaries.

  4. This approach promotes modularity, flexibility, and easier maintenance, but communication between components may be less efficient due to inter-process communication overhead.

Why Windows Kernel is Monolithic and not MicroKernel?

Windows has traditionally used a largely monolithic kernel architecture (strictly speaking, the Windows NT kernel is often described as a hybrid design, but most services still run in kernel mode), primarily for reasons of performance and compatibility.

  1. Performance: The monolithic architecture allows for efficient and direct communication between the operating system components since they all reside within the kernel. This direct access to kernel services can result in faster system performance and lower overhead compared to the inter-process communication overhead in a microkernel design.

  2. Compatibility: Windows has a long history of software and hardware support. By having a monolithic kernel, Windows can provide a comprehensive set of integrated services and APIs, making it easier for developers to create applications that can leverage the full range of functionalities. This approach ensures backward compatibility with a vast number of existing Windows applications and device drivers.

What happens when the computer turns on?

When you turn on your computer, several steps occur in the booting process, which can vary depending on the specific computer system and operating system. Here's a general overview of the booting process:

  1. Power-On Self-Test (POST): The computer's firmware, known as the BIOS (Basic Input/Output System) or UEFI (Unified Extensible Firmware Interface), performs a self-test to check the hardware components. It ensures that the essential hardware, such as the processor, memory, and storage devices, are functioning correctly.

  2. Bootloader: Once the POST completes successfully, the BIOS or UEFI locates and loads a small program known as the bootloader. The bootloader is responsible for initiating the operating system's boot process.

  3. Operating System Initialization: The bootloader transfers control to the operating system. The operating system's kernel is loaded into memory, and the initialization process begins. This includes setting up various system components, initializing drivers, and configuring essential services.

  4. User Login/Interface: After the operating system initializes, the computer presents the user with a login screen or user interface. The user can enter their credentials to log in to the system.

  5. User Environment: Once the user logs in, the operating system loads the user environment, including the desktop, taskbar, and other user-specific settings. The computer is now ready for the user to interact with applications and perform tasks.

It's important to note that the booting process can be more complex and involve additional steps or components, depending on factors such as the presence of a network boot, disk encryption, or other system-specific configurations.

Process

Process Vs Program

Program: An executable file that contains a set of instructions to perform some job or operation on the computer system.

Process: A program under execution is called a process.

Different States of Process

  1. New: The process is being created or initialized. At this stage, system resources, such as memory and process control blocks, are allocated to the process.

  2. Ready: The process is prepared to execute and is waiting to be assigned to a processor. It is in the main memory, has all the necessary resources, and is waiting for its turn to be executed.

  3. Running: The process is currently being executed by the processor. It is actively using the CPU and performing its designated tasks.

  4. Waiting (Blocked): The process is waiting for a particular event or resource to become available. It could be waiting for user input, I/O completion, or a signal from another process. While waiting, the process is temporarily suspended and does not consume CPU resources.

  5. Terminated: The process has completed its execution or has been terminated for some reason. At this stage, system resources used by the process are released and its process control block is removed from the memory.

Types of Process

  1. System Processes: These are processes that are essential for the operating system to function correctly. They manage critical system tasks and services, such as memory management, process scheduling, and I/O operations. System processes often have higher privileges and operate in the background.

  2. User Processes: User processes are created and executed by users to perform specific tasks or run applications. These processes are initiated by users through interactive sessions or by executing programs. Examples include word processors, web browsers, media players, and other user applications.

  3. Foreground Processes: Foreground processes are interactive user processes that receive direct input from the user and have control of the user interface. These processes typically require user attention and may run in the foreground, allowing user interaction.

  4. Background Processes: Background processes are non-interactive processes that run without direct user interaction or control of the user interface. These processes often perform tasks that do not require immediate user attention, such as system maintenance, automated backups, or server processes running in the background.

  5. Daemon Processes: Daemon processes, also known as services or daemons, are background processes that run continuously in the system, providing specific services or functionalities. They usually start when the system boots up and perform tasks like network services, system monitoring, or automatic updates.

  6. Child Processes: Child processes are created by parent processes, typically through a process called process forking. A parent process spawns a child process to perform a specific task or as part of a larger program's execution. Child processes inherit certain characteristics and resources from their parent processes.

  7. Zombie Processes: A zombie process is a child process that has finished executing but whose exit status has not yet been read by its parent. It remains in the process table until the parent acknowledges its termination by reading the exit status (typically via wait()); once that happens, the zombie is removed from the system. Zombie processes are therefore usually short-lived (see the fork()/wait() sketch after this list).
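A minimal POSIX sketch (assuming Linux or a similar Unix) of a parent creating a child with fork() and reaping it with waitpid(), which is what prevents the child from lingering as a zombie:

```c
/* Parent fork()s a child and reaps it with waitpid(), removing the zombie entry. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                     /* child: do some work and exit */
        printf("child %d running\n", (int)getpid());
        _exit(0);                       /* child is a zombie until the parent reaps it */
    }

    int status;
    waitpid(pid, &status, 0);           /* parent reads the exit status (reaps the zombie) */
    printf("parent reaped child %d, status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}
```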

In summary: system processes manage critical system tasks, user processes are initiated by users, foreground processes are interactive, background processes run without direct user interaction, daemon processes provide long-running services, and child processes are created by parent processes.

PCB In Detail

  • Process ID

  • Process State

  • Program Counter

  • CPU Registers

  • CPU Scheduling Info

  • Memory Management Info

  • I/O Status Info

  • Accounting Info

  • Parent Process ID

  • Pointers

  1. Process ID (PID): A unique identifier assigned to each process in the system, enabling the operating system to distinguish and track individual processes.

  2. Process State: Represents the current state of the process, such as running, ready, waiting, or terminated. It helps the operating system manage the execution and scheduling of processes.

  3. Program Counter (PC): Stores the memory address of the next instruction to be executed by the process.

  4. CPU Registers: Contains the values of the processor registers associated with the process, including the accumulator, stack pointer, and general-purpose registers. These registers are saved and restored during context switches.

  5. CPU Scheduling Information: Includes data related to the process's priority, scheduling algorithm used, and the amount of time the process has already executed.

  6. Memory Management Information: Stores information about the process's memory allocation, such as the base address, limit, and page tables.

  7. I/O Status Information: Keeps track of any I/O devices the process is using, including the status of I/O operations, open files, and pending I/O requests.

  8. Accounting Information: Collects data for process accounting purposes, such as CPU usage, execution time, and resources utilized by the process.

  9. Parent Process ID: Stores the PID of the parent process that created the current process. This field helps maintain a hierarchical structure of processes.

  10. Pointers: Contains pointers to various resources related to the process, such as the memory management data structures, open file table, and other process-related data.
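Real kernels use much richer structures (Linux, for example, uses task_struct), but a simplified, purely illustrative PCB could be sketched as below; every field name here is hypothetical and chosen only to mirror the list above.

```c
/* Purely illustrative, simplified PCB; field names are hypothetical and do not
 * correspond to any real kernel's definition. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* Process ID                              */
    int            parent_pid;       /* Parent process ID                       */
    proc_state_t   state;            /* Process state                           */
    unsigned long  program_counter;  /* Address of the next instruction         */
    unsigned long  registers[16];    /* Saved CPU registers                     */
    int            priority;         /* CPU scheduling information              */
    void          *page_table;       /* Memory-management information           */
    int            open_files[16];   /* I/O status information (open fds)       */
    unsigned long  cpu_time_used;    /* Accounting information                  */
    struct pcb    *next;             /* Pointer used by scheduler queues        */
} pcb_t;
```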

What does a process look like in memory?

  1. Code (Instructions): Contains the executable instructions of the program.

  2. Data: Stores variables and constants used by the program.

  3. Stack: Holds temporary data and function call information.

  4. Heap: Provides dynamic memory allocation for data structures.

  5. File Descriptors: References to opened files and I/O resources.

  6. Process Control Block (PCB): Stores information about the process itself, such as state, ID and resource management details.
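A small, hedged C sketch that prints the addresses of objects living in these different regions; the exact layout and address ranges are platform-specific.

```c
/* Prints addresses of objects that typically live in different process segments. */
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;   /* data segment (initialized data)      */
int uninit_var;        /* BSS segment (zero-initialized data)  */

int main(void) {
    int  local_var = 7;                           /* stack */
    int *heap_var  = malloc(sizeof *heap_var);    /* heap  */

    printf("code  : %p\n", (void *)main);
    printf("data  : %p\n", (void *)&global_var);
    printf("bss   : %p\n", (void *)&uninit_var);
    printf("heap  : %p\n", (void *)heap_var);
    printf("stack : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}
```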

What is Process Scheduling?

Process scheduling refers to the method and algorithm used by the operating system to determine which processes should be allocated CPU time and in what order. It involves deciding which process to run, when to run it, and for how long.

Different types of Queue

  1. Ready Queue: This queue holds the processes that are ready to be executed by the CPU. Processes in the ready queue are waiting for their turn to be allocated CPU time.

  2. Waiting Queue: Processes that are waiting for a specific event, such as I/O completion or a particular resource becoming available, are placed in the waiting queue. They are temporarily removed from execution until the event they are waiting for occurs.

  3. Job Queue: The job queue, also known as the long-term scheduler or batch queue, holds the incoming processes that are waiting to be admitted into the system. The long-term scheduler determines when and if these processes should be loaded into memory.

What are Schedulers?

Schedulers are components of the operating system responsible for making decisions on process scheduling and managing the execution of processes. They can be categorized into different types based on the time scale they operate on.

Long-Term Scheduler:

  • Determines which processes from the job queue are admitted into the system and allocated resources.

  • Controls the degree of multiprogramming and focuses on overall system performance and resource utilization.

Medium-Term Scheduler:

  • Performs process swapping or suspension by moving processes from main memory to secondary storage.

  • Helps manage memory efficiently and makes room for other processes in the main memory.

  • Plays a role in memory management and process balancing.

Short-Term Scheduler:

  • Determines which processes from the ready queue are allocated CPU time and in what order.

    • Optimizes CPU utilization and responsiveness, and provides fair execution of processes.

  • Makes decisions on context switches, where the CPU is assigned to a different process.

CPU Bound and I/O Bound Process

CPU Bound Process: A CPU-bound process is one that primarily requires significant computational resources from the CPU to complete its execution. These processes involve a substantial amount of calculations, data processing, or complex computations. They tend to consume more CPU time and require less interaction with input/output (I/O) operations. CPU-bound processes are typically characterized by high utilization of the CPU and may spend most of their time executing instructions rather than waiting for I/O operations to complete.

Input/Output (I/O) Bound Process: An I/O-bound process relies heavily on input/output operations, such as reading from or writing to external devices, disk drives, or network communications. These processes frequently interact with external resources and may spend a significant portion of their execution time waiting for I/O operations to complete. Compared to CPU-bound processes, I/O-bound processes tend to have shorter bursts of CPU usage and more periods of idle time while waiting for I/O operations to finish.

A system performs best with a mix of CPU-bound and I/O-bound processes, so that the CPU and the I/O devices can be kept busy at the same time.

What is meant by context-switch?

A context switch refers to the process of saving the current state of a running process and restoring the saved state of a different process, allowing it to continue execution. When a context switch occurs, the operating system switches the CPU from one process to another, transferring control of the CPU to a new process.

IPC

IPC stands for Interprocess Communication, which refers to the mechanisms and techniques used by processes to exchange information and coordinate their actions in a computer system. IPC enables communication and data sharing between processes running concurrently on the same system. Here are a few commonly used IPC mechanisms:

  1. Pipes: A pipe is a unidirectional communication channel that allows one-way data flow between a pair of related processes. One process writes data to the pipe, and the other process reads the data.

  2. Message Queues: Message queues provide a way for processes to send and receive messages. Messages are stored in a queue and can be read by the receiving process in the order they were sent.

  3. Shared Memory: Shared memory allows multiple processes to access the same region of memory. It enables efficient and fast communication by allowing processes to read and write directly to the shared memory segment.
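Of the mechanisms above, a pipe is the simplest to demonstrate. A minimal POSIX sketch (assuming Linux or another Unix): the parent writes a message into the pipe and the child reads it.

```c
/* Unidirectional pipe between a parent and its child process. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                        /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* child: read from the pipe */
        char buf[64];
        close(fds[1]);                 /* child only reads */
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }

    close(fds[0]);                     /* parent only writes */
    const char *msg = "hello from parent";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);                        /* reap the child */
    return 0;
}
```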

Threads

What are threads?

Threads can be considered lightweight processes within a process. Each thread represents a single sequence of instructions with an independent path of execution, allowing multiple subtasks or operations to be performed concurrently within a process.
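A minimal POSIX threads sketch (compile with -pthread) showing two threads executing concurrently within one process:

```c
/* Two threads run concurrently inside a single process. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("thread %ld doing its subtask\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);            /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}
```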

Multithreading

What is Multithreading?

Multithreading refers to the concurrent execution of multiple threads within a single process.

Benefits of Multithreading

  1. Concurrency: Multithreading allows for concurrent execution of multiple threads within a process. This enables tasks to be executed simultaneously, improving overall system responsiveness and resource utilization.

  2. Responsiveness: Multithreading can enhance the responsiveness of applications by allowing them to remain interactive even during resource-intensive operations. By delegating time-consuming tasks to separate threads, the main thread can continue to respond to user input and provide a smooth user experience.

  3. Parallelism: With multithreading, computationally intensive tasks can be divided into smaller units that can be executed in parallel. This can significantly improve performance by leveraging the capabilities of multi-core processors or distributed computing systems.

  4. Resource Sharing: Threads within a process share the same memory space, allowing for efficient sharing of data and resources. This eliminates the need for complex inter-process communication mechanisms and enables seamless collaboration between threads.

  5. Simplified Design: Multithreading simplifies the design and development of complex applications. By breaking down tasks into smaller threads, each responsible for a specific subtask, the application can be organized into modular and manageable components.

Example

A common example that illustrates the benefits of multithreading is a web server. When a web server receives multiple requests from different clients simultaneously, it can utilize multithreading to handle these requests concurrently.

Instead of processing each request sequentially, the web server creates a separate thread for each incoming request. These threads can run in parallel, independently handling the request and generating the appropriate responses. By doing so, the web server achieves concurrency and responsiveness, allowing multiple clients to be served simultaneously.

In this scenario, multithreading enables the web server to handle multiple connections concurrently, providing faster response times and better overall performance. It leverages the benefits of parallelism, resource sharing, and improved responsiveness offered by multithreading.

Models

Many-to-One: In an operating system, a many-to-one relationship refers to multiple user-level threads mapped to a single kernel-level thread. In this threading model, multiple threads are managed by a user-level thread library, and all the threads share the same kernel-level thread. This model offers simplicity and efficiency but lacks true concurrency and may suffer from blocking if one thread encounters a blocking system call.

Many-to-Many: In the many-to-many threading model, many user-level threads are multiplexed onto a smaller or equal number of kernel-level threads. The operating system can schedule several of these kernel threads in parallel on different processors or cores, so multiple user-level threads can truly run at the same time, while the thread library retains flexibility in how user threads are mapped to kernel threads.

One-to-One: In the one-to-one threading model, each user-level thread is mapped to a corresponding kernel-level thread. This provides a one-to-one relationship between user-level threads and kernel-level threads. With this model, multiple user-level threads can execute in parallel on multiple kernel-level threads, enabling true concurrency. It provides better responsiveness and the ability to fully utilize available system resources.

Many-to-One:

  • Pros: Simplicity, low overhead, efficient context switching.

  • Cons: Lack of true concurrency, potential blocking issues, limited scalability.

Many-to-Many:

  • Pros: True concurrency, parallel execution, good scalability.

  • Cons: Higher overhead due to managing multiple kernel-level threads, increased complexity.

One-to-One:

  • Pros: True concurrency, parallel execution, potential for better responsiveness, scalability.

  • Cons: Higher overhead in terms of thread creation and management, increased memory usage.

Impact of Multicores on Multi-threading

  1. Increased Parallelism: Multicore processors enable true parallel execution of multiple threads, improving overall system performance.

  2. Scalability: Multicore processors allow for better scalability, distributing workload across cores for increased throughput and responsiveness.

  3. Reduced Contentions: Multicores mitigate resource contention issues by providing separate cores for executing threads, improving efficiency.

  4. Load Balancing: Multicores facilitate load balancing, distributing threads across cores to optimize resource utilization.

  5. Concurrency and Responsiveness: Multicore processors enhance concurrent execution, improving application responsiveness and enabling timely event handling.

Thread Context Switching and Process Context Switching

| Thread Context Switching | Process Context Switching |
| --- | --- |
| The OS saves the current state of the thread and switches to another thread of the same process | The OS saves the current state of the process and switches to another process by restoring its saved state |
| Does not involve switching the memory address space (but the program counter, registers, and stack are switched) | Involves switching the memory address space |
| Fast switching | Slow switching |
| The CPU's cache state is preserved | The CPU's cache/TLB state is effectively flushed |

Process Scheduling

What is Process Scheduling and why is it needed?

Process scheduling is the operating system activity of managing processes that are in the ready, waiting, or running states. It allocates each process a time interval of CPU execution, keeps the CPU busy as much of the time as possible, and thereby helps minimize the response time of processes.

CPU Burst Cycle

The CPU burst cycle refers to the alternating periods of CPU execution (CPU bursts) and I/O waiting (I/O bursts) that occur during the execution of a process. The cycle consists of the following phases:

  1. CPU Burst: This is the period during which the process actively executes on the CPU, performing computations or executing instructions.

  2. I/O Burst: After the CPU burst, the process may need to perform input/output (I/O) operations, such as reading from or writing to a disk, network, or other external devices. During this phase, the process is waiting for the completion of the I/O operation and is considered to be in an idle state.

  3. Ready/Waiting for the CPU: Once the I/O operation is completed, the process re-enters the ready queue and waits for the CPU scheduler to allocate the CPU so it can begin its next CPU burst.

  4. CPU Idle Time: If no other processes are waiting for the CPU, the process may continue executing on the CPU immediately after the I/O burst. However, if there are other processes in the system, the CPU may be assigned to a different process, resulting in idle time for the current process.

The CPU burst cycle repeats as long as the process requires CPU execution and encounters I/O operations or scheduling events. The length of the CPU bursts and the duration of the I/O bursts can vary greatly depending on the nature of the process and the specific operations it performs.

Preemptive and Non-Preemptive Scheduling

Preemptive scheduling allows the operating system to forcefully interrupt a running process or thread and allocate the CPU to another process or thread.

  • Advantages:

    • Responsiveness: Allows immediate execution of high-priority tasks.

    • Fairness: Ensures each process gets a fair share of CPU time.

    • Prioritization: Enables priority-based scheduling for critical tasks.

  • Disadvantages:

    • Overhead: Introduces overhead due to frequent context switches.

    • Complexity: Requires complex synchronization mechanisms.

    • Non-determinism: May result in non-deterministic behavior.

Non-preemptive scheduling allows a running process or thread to continue execution until it voluntarily releases the CPU, enters a waiting state, or completes its execution.

  • Advantages:

    • Efficiency: Incurs less overhead and efficient resource utilization.

    • Predictability: Ensures deterministic behavior and timing.

  • Disadvantages:

    • Responsiveness: May result in lower system responsiveness.

    • Fairness: Can lead to fairness issues in multitasking scenarios.

    • Resource Utilization: May underutilize system resources.

What is Dispatcher?

  • The dispatcher is a component responsible for managing the context switch between processes or threads.

  • Its main role is to transfer control of the CPU from the currently running process or thread to another process or thread that is ready to execute.

  • The dispatcher performs tasks such as saving the state of the currently executing process or thread, loading the state of the selected process or thread, and updating the necessary data structures to facilitate the switch.

Role of Dispatcher:

  • The dispatcher plays a crucial role in task scheduling and resource allocation within the operating system.

  • It determines which process or thread should be allocated the CPU based on scheduling policies, priorities, and other criteria.

  • It ensures fair sharing of CPU resources among processes or threads, maximizing overall system efficiency, and facilitating multitasking.

Dispatch Latency:

  • Dispatch latency refers to the time taken by the dispatcher to perform a context switch between processes or threads.

  • It represents the delay from the moment a process or thread is chosen to run by the scheduler to the moment it starts executing on the CPU.

  • Dispatch latency is influenced by various factors, including the complexity of the context switch, the efficiency of the dispatcher implementation, and the system load.

  • Minimizing dispatch latency is important to maintain system responsiveness and ensure efficient CPU utilization.

Scheduling Criteria

  1. CPU Utilization: Measures the percentage of time the CPU is actively executing tasks. High CPU utilization indicates efficient utilization of processing power.

  2. Throughput: Refers to the number of tasks completed per unit of time. Higher throughput implies more work being done in a given time, indicating better overall system performance.

  3. Turnaround Time (TAT): Measures the time taken from the submission of a task until its completion, including waiting time and execution time. Lower TAT indicates faster task completion.

  4. Waiting Time: This represents the total time a task spends waiting in the ready queue before it can start executing. Minimizing waiting time improves task responsiveness and overall system efficiency.

  5. Response Time: Measures the time it takes from task submission to the first response or output. Lower response time provides a more interactive and responsive user experience.
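Assuming the arrival, burst, and completion times are known, the criteria above are related by a few formulas interviewers commonly expect:

  • Turnaround Time = Completion Time − Arrival Time

  • Waiting Time = Turnaround Time − Burst Time

  • Response Time = Time of first CPU allocation − Arrival Time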

Scheduling Algorithm

FCFS: FCFS, or First-Come-First-Serve, is a scheduling algorithm where tasks are executed in the order they arrive. It follows a simple approach of servicing tasks based on their arrival time, with the first task arriving being the first to be executed. While it is easy to understand and implement, FCFS may suffer from high waiting times and low throughput, especially if long-running tasks arrive early, causing subsequent tasks to wait for a significant amount of time.

Tip: Whichever process arrives first gets the CPU first.

Problem: If long-running tasks arrive first, the shorter tasks queued behind them must wait a long time (the convoy effect).
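A hedged sketch of how FCFS waiting and turnaround times can be computed; the three burst times are made-up example values and all processes are assumed to arrive at time 0.

```c
/* Illustrative FCFS calculation: processes arrive at time 0 and run in arrival order. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* hypothetical burst times for P1, P2, P3 */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;               /* each process waits for all earlier bursts */
        total_tat  += wait + burst[i];    /* turnaround = waiting + its own burst      */
        wait       += burst[i];
    }
    printf("average waiting time   : %.2f\n", (double)total_wait / n);  /* 17.00 */
    printf("average turnaround time: %.2f\n", (double)total_tat / n);   /* 27.00 */
    return 0;
}
```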

SJF: SJF, or Shortest Job First, is a scheduling algorithm that prioritizes tasks based on their burst time or execution time. It selects the task with the shortest burst time and executes it first. SJF aims to minimize the average waiting time and improve throughput by giving preference to shorter tasks. However, it may suffer from a lack of fairness, as longer tasks may experience increased waiting time or starvation if shorter tasks continually arrive.

Tip: The task with the shortest burst time (execution time) gets executed first.

Problem: If short tasks keep arriving, long tasks may wait indefinitely and starve.

Priority Scheduling: Priority Scheduling is a scheduling algorithm where tasks are assigned priorities, and the task with the highest priority is executed first. Priorities can be based on factors such as importance, urgency, or resource requirements. Priority scheduling can be either preemptive or non-preemptive: preemptive priority scheduling allows a higher-priority task to interrupt a running lower-priority task, while non-preemptive priority scheduling lets a running task complete its execution before the next task is considered. In the non-preemptive case, the CPU is not forcefully taken away from a running task by higher-priority tasks; once a task starts, it retains the CPU until it finishes.

Priority scheduling provides flexibility in handling different task priorities but can result in lower-priority tasks experiencing longer waiting times or even starvation if not managed properly.

Round Robin: Round Robin is a popular time-sharing scheduling algorithm where tasks are assigned fixed time slices or quanta cyclically. Each task is given a small amount of time to execute, typically referred to as a time quantum. When a time quantum expires, the task is preempted, and the CPU is allocated to the next task in the queue. The preempted task is then placed back in the queue to await its turn again. Round Robin ensures fairness by providing equal opportunity for all tasks to execute and prevents a single task from monopolizing the CPU for an extended period. However, it may suffer from high context switch overhead, especially if the time quantum is too small, and it may not be suitable for tasks with different levels of urgency or varying execution times.
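A hedged sketch of Round Robin with a fixed time quantum; the burst times are made up, all processes are assumed to arrive at time 0, and the simple cyclic scan below stands in for a real FIFO ready queue.

```c
/* Illustrative Round Robin simulation: quantum = 4, made-up burst times. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};
    int n = 3, quantum = 4, time = 0, done = 0;
    int remaining[3], completion[3];

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {                          /* cycle through the ready processes */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time         += slice;              /* run for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {            /* process finishes */
                completion[i] = time;
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d: turnaround %d, waiting %d\n",
               i + 1, completion[i], completion[i] - burst[i]);
    return 0;
}
```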

Starvation

Starvation occurs when a process is continually delayed or prevented from executing or accessing resources it requires for an extended period. This can happen when a process is constantly deprioritized or bypassed in favor of other processes, resulting in prolonged waiting times. Starvation can impact system performance, fairness, and overall efficiency, and it is important for scheduling algorithms to be designed in a way that prevents or mitigates the occurrence of starvation by ensuring that all processes have a fair opportunity to execute and access resources.

Aging

As noted above, low-priority processes may never get a turn on the CPU because higher-priority processes keep arriving. Aging addresses this in priority scheduling: the priority of a waiting process is increased periodically over time, so that it eventually becomes high enough to be scheduled and the process does not starve.

Synchronization

Why is process synchronization needed?

Today's systems are multiprogramming, time-sharing environments in which many processes run concurrently. These processes fall into two categories: independent and cooperating.

A cooperating process is one whose execution can affect, or be affected by, other processes because they share resources. If such processes are not coordinated properly, the result can be inconsistent data and other problems, which is why process synchronization is essential in an operating system.

A few other reasons why process synchronization is important:

  • Mutual Exclusion: One of the main reasons for process synchronization is to achieve mutual exclusion. When multiple processes or threads access shared resources simultaneously, conflicts may occur, leading to unpredictable results or data corruption. Synchronization mechanisms such as locks, semaphores, or mutexes are used to allow only one process at a time to access the critical section, ensuring data integrity and preventing race conditions.

    In short, mutual exclusion means only one process may access the shared resource at a time.

  • Deadlock and Starvation Prevention: Synchronization helps prevent deadlocks and starvation. Deadlock occurs when processes are stuck waiting for resources indefinitely, unable to make progress.

  • Interprocess communication: Synchronization facilitates interprocess communication (IPC) between different processes. Processes may need to exchange data or coordinate their activities, and synchronization mechanisms like message queues, shared memory, or pipes enable efficient and secure communication among processes.

  • Avoiding Race Condition: Process synchronization mechanisms help prevent race conditions by ensuring that critical sections of code, which access shared resources, are executed atomically or in a controlled manner.

    A race condition arises when the critical section is not managed properly, so the outcome depends on the unpredictable order in which processes access the shared data.

Physical Address and Logical Address

Physical Address Space: The physical address space refers to the actual physical memory locations available in the hardware. It represents the physical memory chips, such as RAM (Random Access Memory), where data and instructions are stored.

Logical Address Space: The logical address space, also known as the virtual address space, is the address space that a process "sees" or uses when accessing memory. It is independent of the actual physical memory. Processes use logical addresses to reference memory locations without having to know the physical memory layout.

Mutual Exclusion and Critical Section

Mutual Exclusion: In computing, mutual exclusion is a mechanism that ensures only one process or thread can access a shared resource at any given time. It is achieved through the use of synchronization primitives such as locks, semaphores, or mutexes. When a process or thread wants to access a shared resource, it first tries to acquire the lock associated with that resource. If the lock is already held by another process, the requesting process will be blocked or put to sleep until the lock becomes available. This guarantees that only one process can access the shared resource at a time, preventing concurrent access and potential conflicts.

Critical Section: A critical section refers to a specific part of a program or code where shared resources are accessed or modified. It is a section that must be executed atomically or without interruption to maintain data consistency. When a process or thread enters a critical section, it typically acquires a lock or semaphore associated with that section. This ensures that other processes or threads are prevented from entering the same critical section simultaneously. By enforcing mutual exclusion, the critical section ensures that shared resources are accessed safely and consistently.
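A minimal POSIX threads sketch (compile with -pthread): the mutex enforces mutual exclusion around the critical section, so the two threads' increments of the shared counter never interleave destructively.

```c
/* Two threads increment a shared counter; the mutex makes the increment atomic
 * with respect to the other thread. Without it, updates could be lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;                     /* shared-resource access  */
        pthread_mutex_unlock(&lock);   /* leave critical section  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the mutex */
    return 0;
}
```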

Critical Section Problem

The critical section problem is a classic synchronization problem in computer science. It refers to the challenge of coordinating multiple processes or threads that share a common resource or data, to ensure correct and orderly execution.

Any synchronization mechanism that attempts to solve the critical section problem must satisfy the conditions below:

  1. Mutual Exclusion: Any process or thread that is executing in its critical section must have exclusive access to the shared resource. This means that no other process or thread can execute in their critical sections at the same time.

  2. Progress: If no process or thread is executing in its critical section and some wish to enter theirs, then only those that are not executing in their remainder sections may participate in deciding which one enters next, and this decision cannot be postponed indefinitely. In other words, a process running outside its critical section must not block others from entering theirs.

  3. Bounded Waiting: There must be a bound on the number of times other processes or threads are allowed to enter their critical sections after a process or thread has requested entry into its own critical section and before that request is granted. This ensures fairness and prevents starvation, where a process or thread is continuously denied access to its critical section.

Why are preemptive kernels considered better than non-preemptive kernels?

Preemptive kernels, also known as preemptive multitasking, have several advantages over non-preemptive kernels (also known as cooperative multitasking). Here are some reasons why preemptive kernels are often considered better:

  1. Responsiveness: Preemptive kernels offer better responsiveness and user experience. They prioritize time-critical tasks and allow for quick context switches between processes or threads. This ensures that high-priority tasks can be executed promptly, even if a lower-priority task is currently running.

  2. Resource Allocation: Preemptive kernels effectively manage system resources by enforcing fair allocation. They prevent any single process or thread from monopolizing system resources for an extended period. This helps maintain a balanced and efficient execution environment.

  3. Fault Isolation: Preemptive kernels provide better fault isolation and system stability. If a process or thread encounters an error or enters an infinite loop, a preemptive kernel can detect the issue and terminate or preempt it. This prevents the entire system from becoming unresponsive or crashing due to one faulty process.

  4. Security: Preemptive kernels enhance system security by isolating processes or threads from one another. Each process or thread operates in its own protected memory space, preventing unauthorized access or interference. Preemptive kernels also offer better protection against malicious processes that may attempt to exploit vulnerabilities.

  5. Support for Real-Time Tasks: Preemptive kernels are essential for real-time systems where timely responses are critical. They can prioritize time-sensitive tasks by interrupting lower-priority tasks. This allows for deterministic and predictable execution, making them suitable for applications like industrial control systems, robotics, and multimedia processing.

  6. Multithreading Support: Preemptive kernels efficiently handle multithreaded applications. They ensure that threads are scheduled fairly and can make progress without being blocked by other threads. This is particularly beneficial in scenarios where multiple threads need to execute concurrently, share resources, or communicate with each other.

The same points, summarized concisely:

  1. Responsiveness: Preemptive kernels ensure quick context switches for better task prioritization and user experience.

  2. Resource Allocation: They prevent resource monopolization, ensuring fair allocation among processes or threads.

  3. Fault Isolation: Preemptive kernels detect and handle errors or infinite loops, enhancing system stability.

  4. Security: They provide process isolation and protection against unauthorized access or interference.

  5. Support for Real-Time Tasks: Preemptive kernels are crucial for timely responses in real-time systems.

  6. Multithreading Support: They efficiently handle concurrent execution and resource sharing among threads.

What are Semaphores?

Semaphores are a synchronization mechanism used in concurrent programming to control access to shared resources. They act as counters that help manage the number of concurrent threads or processes accessing a particular resource.

In simple terms, a semaphore is like a traffic signal with a limited number of permits. It has two fundamental operations:

  1. P (wait) operation: If a thread or process wants to access a shared resource, it attempts to decrement the semaphore count. If the count is positive, the thread can proceed, and the count is decreased. If the count is zero, indicating that the resource is currently being used, the thread is blocked until it becomes available.

  2. V (signal) operation: When a thread or process finishes using the shared resource, it performs the V operation, incrementing the semaphore count. If there were other threads blocked, waiting for the resource, the V operation allows one of them to proceed.

Semaphores can be classified into two types:

  1. Binary Semaphores: Also known as mutexes (short for mutual exclusion), binary semaphores have two states: 0 and 1. They are typically used for mutual exclusion, allowing only one thread or process at a time to access a shared resource.

  2. Counting Semaphores: Counting semaphores can have a value greater than 1, allowing a specified number of threads or processes to access a resource simultaneously. They are useful for managing resources with limited capacity, such as a fixed-size buffer.
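A hedged POSIX sketch (assuming Linux; compile with -pthread): a counting semaphore initialized to 2 lets at most two of the four threads use the resource at once. sem_wait() plays the role of the P operation and sem_post() the V operation.

```c
/* POSIX counting semaphore: at most 2 of the 4 threads hold a "slot" at a time. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t slots;

static void *use_resource(void *arg) {
    long id = (long)arg;
    sem_wait(&slots);                  /* P: acquire a permit (blocks if none left) */
    printf("thread %ld using the resource\n", id);
    sleep(1);                          /* simulate work while holding the permit */
    printf("thread %ld done\n", id);
    sem_post(&slots);                  /* V: release the permit */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&slots, 0, 2);            /* counting semaphore with 2 permits */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, use_resource, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```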

What is Busy Waiting and Spin Lock

Busy waiting, also known as spin waiting or spinning, is a technique used in concurrent programming where a thread or process repeatedly checks for a condition to become true in a loop. During busy waiting, the thread continuously executes the loop, consuming CPU resources, until the desired condition is met.

A spin lock is a synchronization primitive that utilizes busy waiting. It is a type of lock that a thread can acquire to gain exclusive access to a shared resource. When a thread attempts to acquire a spin lock and finds that it is already held by another thread, instead of immediately blocking or yielding the CPU, the thread enters a busy waiting state. It repeatedly checks the lock's availability in a loop until it becomes free.
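A minimal sketch of a spin lock built on C11 atomics: spin_lock() busy-waits in a loop until the flag becomes free. The single-threaded main() is only there to show the calling pattern.

```c
/* Spin lock using C11 atomic_flag: spin_lock() busy-waits until it wins the flag. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void spin_lock(void) {
    /* test_and_set returns the previous value; keep spinning while it was set */
    while (atomic_flag_test_and_set(&lock)) {
        /* busy waiting: burn CPU cycles until the holder releases the lock */
    }
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock);
}

int main(void) {
    spin_lock();
    printf("inside the critical section\n");
    spin_unlock();
    return 0;
}
```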

What is Deadlock and Starvation

Deadlock and starvation are both concurrency-related issues that can occur in multithreaded or multiprocess systems.

  1. Deadlock: Deadlock refers to a situation where two or more threads or processes are indefinitely waiting for each other to release resources, resulting in a system-wide halt. Each thread or process holds a resource that another thread or process needs to proceed, leading to a cyclic dependency. As a result, none of the threads or processes can make progress, and the system becomes deadlocked.

  2. Starvation: Starvation occurs when a thread or process is continuously denied access to resources or is unable to make progress. It happens when one or more threads or processes monopolize resources, causing other threads or processes to wait indefinitely or have significantly delayed access. Starvation can be caused by poor scheduling algorithms, unfair resource allocation policies, or priority inversion scenarios.

In both deadlock and starvation, the system's execution is impacted, and threads or processes are unable to proceed as intended. However, there are some differences between the two:

  • Deadlock involves a specific situation where multiple threads or processes are stuck waiting for each other in a circular dependency. It results in a complete system freeze.

  • Starvation, on the other hand, refers to a condition where one or more threads or processes are continuously denied access to resources, leading to delayed or no progress. The system remains functional but with reduced efficiency.

Deadlocks

What is a Deadlock?

Deadlock: Deadlock refers to a situation where two or more threads or processes are indefinitely waiting for each other to release resources, resulting in a system-wide halt. Each thread or process holds a resource that another thread or process needs to proceed, leading to a cyclic dependency. As a result, none of the threads or processes can make progress, and the system becomes deadlocked.

Effects of Deadlock

Deadlock can have several significant effects on a system, impacting its functionality, performance, and overall user experience. Here are some common effects of deadlock:

  1. System Freeze: Deadlock can cause the entire system to freeze or become unresponsive. The deadlock situation prevents any progress from being made, resulting in a complete halt of the system's execution. This can lead to system crashes or the need for manual intervention to recover from the deadlock.

  2. Resource Waste: When deadlock occurs, resources are held by threads or processes that are waiting for other resources to be released. As a result, these resources remain unused and cannot be utilized by other threads or processes that might need them. This resource waste can decrease the system's overall efficiency and performance.

  3. Reduced Throughput: Deadlock reduces the system's throughput, which is the number of processes or threads that can be completed within a given timeframe. Since deadlocked processes or threads cannot proceed, the overall number of completed tasks decreases, resulting in a decrease in the system's productivity and efficiency.

  4. Increased Response Time: When deadlock occurs, threads or processes that are waiting for resources are stuck in a blocked state. This increases the response time for tasks that depend on these blocked threads or processes. Consequently, users may experience delays or unresponsiveness in the system, leading to a degraded user experience.

  5. Difficulty in Debugging: Deadlock can be challenging to diagnose and debug, especially in complex systems with multiple interacting threads or processes. Identifying the root cause of the deadlock and finding a resolution often requires careful analysis of the system's state and dependencies. This adds complexity and time to the debugging process.

  6. Need for Manual Intervention: In severe cases of deadlock, manual intervention may be required to recover the system. This could involve restarting the system, terminating affected processes or threads, or releasing the locked resources forcibly. Manual intervention adds operational overhead and disrupts the normal functioning of the system.

The same points, summarized concisely:

  1. System Freeze: Deadlock causes the entire system to freeze, resulting in a complete halt of execution.

  2. Resource Waste: Deadlock leads to unused resources, reducing overall system efficiency.

  3. Reduced Throughput: Deadlock decreases the number of completed tasks within a given timeframe.

  4. Increased Response Time: Deadlock increases the response time for tasks dependent on blocked threads or processes.

  5. Difficulty in Debugging: Deadlock can be challenging to diagnose and resolve in complex systems.

  6. Need for Manual Intervention: Severe deadlock situations may require manual intervention to recover the system.

What are the necessary conditions for a Deadlock to Occur?

  1. Mutual Exclusion: At least one resource in the system must be held in exclusive control by one process or thread at a time. This means that once a process or thread holds a resource, no other process or thread can access it until it is released. Mutual exclusion is a fundamental property of many resources, such as printers, files, or database records.

  2. Hold and Wait: A process or thread must be holding at least one resource while waiting to acquire additional resources. In other words, a process that is already holding resources can request additional resources without releasing the ones it currently holds. This condition creates the potential for resource contention and can contribute to deadlock if multiple processes are holding different resources and waiting for others.

  3. No Preemption: Resources cannot be forcibly taken away from a process or thread. Only the process or thread holding a resource can voluntarily release it. This means that if a process or thread is holding a resource and needs additional resources, it must wait until those resources become available, leading to potential deadlock situations.

  4. Circular Wait: A circular chain of two or more processes or threads exists, where each process or thread is waiting for a resource held by another process or thread in the chain. This circular dependency creates a situation where no process or thread can progress, resulting in a deadlock.
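A hedged sketch that will almost always deadlock: each thread holds one mutex and waits for the other, which satisfies mutual exclusion, hold and wait, no preemption, and circular wait simultaneously. Acquiring the locks in the same global order in both threads would remove the circular wait and thus the deadlock.

```c
/* Classic two-lock deadlock: thread A holds lock1 and waits for lock2,
 * thread B holds lock2 and waits for lock1. Running this will usually hang. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock1);        /* hold lock1 ...             */
    sleep(1);                          /* give B time to grab lock2  */
    pthread_mutex_lock(&lock2);        /* ... and wait for lock2     */
    printf("A got both locks\n");
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *thread_b(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock2);        /* hold lock2 ...             */
    sleep(1);
    pthread_mutex_lock(&lock1);        /* ... and wait for lock1     */
    printf("B got both locks\n");
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);             /* never returns once the deadlock forms */
    pthread_join(b, NULL);
    return 0;
}
```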

Methods for Deadlock Handling

Strongly recommended: the Gate Smashers video on deadlock explains these methods well.

Prevention or Avoidance:

Ensure that at least one of the four necessary conditions can never hold:

  • Mutual Exclusion

  • Hold and Wait

  • No Preemption

  • Circular Wait

If at least one of these conditions is prevented from holding, a deadlock cannot occur.

Detection or Recovery:

Allow deadlocks to occur, detect them (for example, by finding a cycle in the resource-allocation graph), and then recover by terminating one or more of the deadlocked processes or by preempting resources.

Deadlock Ignorance:

Ignore the problem and assume deadlocks never occur (the "ostrich" approach). Most general-purpose operating systems take this approach because deadlocks are rare and the cost of prevention or detection is high.