Concepts of Operating Systems

a post for students in my community college course

A modern computer is an incredibly complex system consisting of processors, memory, hard disks, network interfaces, and a myriad of other input/output devices. If every application programmer had to understand the intricate details of how all these hardware components work, no software would ever get written. To manage this complexity, computers are equipped with an essential layer of software called the operating system.

The operating system fundamentally serves two purposes. First, it acts as an extended machine, hiding the messy, ugly hardware and presenting programs with beautiful, clean, and consistent abstractions. Second, it acts as a resource manager, bringing order to the potential chaos by efficiently and securely multiplexing the hardware among multiple competing programs and users. To understand how the operating system achieves this, we must explore its foundational concepts.

1. Processes and Threads

The most central concept in any operating system is the process. A process is essentially an abstraction of a running program, encompassing its memory, registers, program counter, and current state. In a multiprogramming system, the CPU rapidly switches back and forth between multiple processes, giving users the illusion that dozens of programs are running simultaneously in parallel.

To decide when to stop work on one process and service a different one, the operating system uses a scheduling algorithm. When the operating system suspends a process, it saves all of the process's information in that process's entry in an internal structure called the process table. This ensures that when the process is eventually restarted, it continues from the exact state it was in before it was stopped. Processes can also spawn new "child processes," creating a deep hierarchical tree of execution.
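The parent/child relationship above can be seen from ordinary user code. Here is a minimal sketch in Python that uses the standard `subprocess` module to spawn a child process, wait for it, and collect its exit status (the child's one-line program is just a placeholder for this demo):

```python
import subprocess
import sys

# Spawn a child process that runs a tiny Python program, then wait for it.
# The parent blocks until the child terminates and then reads its exit status.
child = subprocess.run(
    [sys.executable, "-c", "print('hello from the child')"],
    capture_output=True,
    text=True,
)

print("child said:", child.stdout.strip())
print("child exit status:", child.returncode)
```

On UNIX, `subprocess.run` is built on the same fork/exec/wait machinery the operating system exposes to every program.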

Operating systems also support “threads,” which are essentially lightweight miniprocesses that run within a single process. While different processes have completely isolated memory, multiple threads within the same process share the exact same address space, global variables, and open files. Threads are incredibly useful for modern applications. For example, a word processor might use one thread to handle keyboard input, a second thread to continuously reformat the document, and a third thread to save backups to the disk—all working cooperatively in the background.
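Because threads share one address space, two threads can update the same variable directly, which is both their power and their danger. The sketch below, using Python's `threading` module, has four threads increment a single shared counter; the lock is what keeps their updates from interleaving incorrectly:

```python
import threading

counter = {"value": 0}       # shared by every thread in this process
lock = threading.Lock()      # protects the shared counter from races

def worker(n):
    for _ in range(n):
        with lock:           # without the lock, updates could interleave
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])      # all four threads updated the same variable
```

Separate processes could not share `counter` this way at all; each would get its own isolated copy.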

2. Address Spaces and Virtual Memory

Every computer has main memory (RAM) used to hold executing programs. If multiple processes are to reside in memory at the same time, the operating system must protect them from interfering with one another. It achieves this through the abstraction of an "address space." An address space is the set of addresses, ranging from zero to some maximum, that a specific process is allowed to read and write.

But what happens if a process’s address space is larger than the computer’s available physical RAM? Modern operating systems solve this elegantly using a technique called virtual memory. Virtual memory decouples the process’s address space from the machine’s physical memory. The OS keeps the most heavily used parts of the program in main memory and stores the rest on the hard disk, rapidly shuttling pieces back and forth as needed.
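The bookkeeping behind virtual memory is mostly arithmetic: an address is split into a page number and an offset, and a page table maps pages to physical frames. The toy translator below sketches this in Python, assuming 4 KiB pages and a made-up three-entry page table (real hardware does this in the MMU, not in software like this):

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common size (an assumption for this demo)

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 11}

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE    # which page the address falls in
    offset = virtual_address % PAGE_SIZE   # position within that page
    if page not in page_table:
        # A real OS would handle this "page fault" by fetching the page
        # from disk; here we just signal it.
        raise KeyError("page fault: page %d is not in memory" % page)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset      # the physical address

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```

A reference to a page missing from the table is exactly the "page fault" that triggers the shuttling between RAM and disk described above.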

3. Files and Directories

While processes and memory manage active computations, computers also need persistent, long-term storage that survives process termination and system crashes. The operating system provides this through the abstraction of “files,” shielding the user from the messy details of how data is actually stored on a disk.

To keep files organized, operating systems group them into directories (or folders), creating a hierarchical tree structure. Files within this tree are identified by their path name, which can be an absolute path starting from the root directory or a relative path starting from the process’s current working directory.
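The absolute/relative distinction is easy to see with Python's standard `os.path` helpers; the file names here are made up for the example:

```python
import os.path

# A relative path is resolved against the process's current working
# directory; an absolute path identifies the file from the root down.
relative = os.path.join("notes", "osdi.txt")   # hypothetical file name
absolute = os.path.abspath(relative)           # prepends the working directory

print(os.path.isabs(relative))   # False
print(os.path.isabs(absolute))   # True
```

Two processes with different working directories would resolve the same relative path to different files, which is why configuration files usually record absolute paths.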

In UNIX-based systems, the file concept is incredibly versatile. It includes “special files” that make I/O devices look like regular files, allowing programs to read from or write to hardware using standard file operations. UNIX also utilizes the “mount” system call to seamlessly attach the file systems of removable media—like optical discs or USB drives—directly onto the main directory tree. Additionally, “pipes” act as pseudofiles that connect two processes, allowing the output of one process to flow directly as the input to another.
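Pipes are not just a shell trick; any program can create one. The sketch below uses Python's `os.pipe` wrapper around the underlying system call: bytes written to the write end come out of the read end, exactly as if one process's output were another's input:

```python
import os

# Create a pipe: a pair of file descriptors connected back to back.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"ls output would flow here\n")
os.close(write_fd)          # closing the write end signals end-of-input

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode().strip())
```

In a real pipeline like `ls | sort`, the shell creates the pipe and hands one end to each child process before starting them.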

4. Input/Output (I/O) Management

A computer wouldn’t be very useful without physical devices to acquire input and produce output, such as keyboards, monitors, and printers. The operating system features an I/O subsystem dedicated to managing these devices and providing device independence.

This subsystem relies heavily on “device drivers”. A device driver is a specialized software module that understands the intricate, low-level mechanics of a specific hardware controller. The driver handles the hardware’s quirks and presents a standardized interface to the rest of the OS. This abstraction ensures that application programmers don’t need to write custom code for every new brand of hard drive or printer; they simply interact with the OS’s clean abstractions.
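The key idea is that every driver exposes the same interface while hiding different hardware behind it. This toy sketch in Python (the class names and 512-byte block size are inventions for the example, not any real driver API) shows the shape of that arrangement:

```python
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """Hypothetical driver interface: the OS calls these same methods on
    every block device, and each driver hides its controller's quirks."""

    @abstractmethod
    def read_block(self, block_number: int) -> bytes: ...

    @abstractmethod
    def write_block(self, block_number: int, data: bytes) -> None: ...

class RamDisk(BlockDevice):
    """A toy 'driver' that keeps blocks in a dictionary instead of on real
    hardware, purely to illustrate the uniform interface."""

    BLOCK_SIZE = 512

    def __init__(self):
        self.blocks = {}

    def read_block(self, block_number):
        # Unwritten blocks read back as zeros, like a freshly formatted disk.
        return self.blocks.get(block_number, b"\x00" * self.BLOCK_SIZE)

    def write_block(self, block_number, data):
        self.blocks[block_number] = data

disk = RamDisk()
disk.write_block(0, b"boot sector")
print(disk.read_block(0))
```

Swapping `RamDisk` for a driver that talks to a real controller would change nothing in the code that calls `read_block` and `write_block`, which is the whole point of device independence.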

5. Protection and Security

With numerous users and processes sharing a system, the operating system must aggressively manage security. Computers house vast amounts of confidential information that must be protected from unauthorized access.

Operating systems enforce security by assigning ownership and access permissions. In UNIX, every authorized user is assigned a User ID (UID), and every process runs with the UID of the person who started it. Files are protected by a 9-bit binary code that specifies read, write, and execute permissions separately for the file’s owner, the owner’s group, and everyone else. A special user, known as the superuser (or Administrator in Windows), has the power to bypass these protection rules to perform system maintenance.
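Those nine bits are exactly what you see in an `ls -l` listing as strings like `rwxr-xr-x`. A small decoder makes the encoding concrete: three bits each for owner, group, and others, with values 4 (read), 2 (write), and 1 (execute):

```python
# Decode a UNIX permission value such as 0o755 into the familiar
# rwxr-xr-x form: three bits each for owner, group, and others.
def decode_mode(mode):
    out = []
    for shift in (6, 3, 0):                  # owner, group, others
        bits = (mode >> shift) & 0b111
        for i, ch in enumerate("rwx"):
            out.append(ch if bits & (4 >> i) else "-")
    return "".join(out)

print(decode_mode(0o755))  # rwxr-xr-x
print(decode_mode(0o640))  # rw-r-----
```

So `0o640` means the owner can read and write, the group can only read, and everyone else is shut out entirely.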

6. System Calls and The Shell

How do user programs actually ask the operating system to create files, allocate memory, or spawn processes? They do so using “system calls”. A system call is like a special kind of procedure call that executes a trap instruction, switching the CPU from user mode (where programs normally run) to kernel mode (where the OS operates with full privileges). Once the OS identifies what the calling process wants by inspecting the parameters, it carries out the work and returns control to the instruction immediately following the system call.
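Even in a high-level language, you are rarely far from a system call. Python's low-level `os` functions are thin wrappers around the corresponding UNIX calls; each one below triggers the user-mode-to-kernel-mode trap just described:

```python
import os
import tempfile

# mkstemp creates a scratch file via open(2) and returns its descriptor.
fd, path = tempfile.mkstemp()

count = os.write(fd, b"written via a system call\n")  # write(2): trap to kernel
os.close(fd)                                          # close(2): trap to kernel

print("bytes written:", count)
os.remove(path)                                       # unlink(2): clean up
```

The return value of `os.write` is the kernel's answer, handed back to the process at the instruction after the trap.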

Finally, while not technically part of the operating system itself, the “shell” (or command interpreter) is the primary interface through which users interact with these OS concepts. The shell is a user-mode program that reads terminal input and executes commands. When a user types a command, the shell creates a child process to run the requested utility, effortlessly handling advanced features like background execution and input/output redirection.
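The core of a shell is surprisingly small: read a line, split it into words, run the command in a child process, report the status, repeat. The toy sketch below captures one iteration of that loop using Python's `shlex` and `subprocess` modules (real shells add job control, redirection, and much more):

```python
import shlex
import subprocess

def run_command(line):
    """One iteration of a toy shell loop: parse the line, spawn a child
    process for the command, wait for it, and return its exit status."""
    args = shlex.split(line)       # split respecting quotes, like a shell
    if not args:
        return 0                   # empty line: nothing to do
    child = subprocess.run(args)   # roughly fork + exec + wait
    return child.returncode

# A real shell would loop forever: prompt, read a line, run it, repeat.
status = run_command("echo hello from the toy shell")
print("exit status:", status)
```

Notice that the shell itself never implements `echo`; it merely asks the operating system to run it in a fresh child process, which is exactly the division of labor the section describes.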

Conclusion

From abstracting the CPU into processes and threads, to managing physical RAM through virtual address spaces, to simplifying disks into files, the operating system is a master of illusion. By implementing these core concepts, the OS transforms the chaotic, complex reality of hardware into the highly structured, beautiful, and secure environment that makes modern computing possible.