Operating Systems Concepts

Hello friends!

This week's blog post is going to be a long one! I'll be giving a quick overview of the major components of an operating system and the concepts behind them!

Operating Systems



 

Operating Systems Major Functions

Operating systems act as the manager and translator for the entire computer. They ensure that when someone wants to do something on their computer, they aren't forced to treat every task as if they were doing math on an abacus. This management is essential to the polished experience you see when sitting down at a personal computer. With that said, we can get into more specifics.

 

1. Process Management-

At the very base, we have the section that requires the most "translation." The OS creates and terminates different processes, allocates CPU time and memory, and manages the communication flow between different processes. In simpler terms, this is what comes into play when starting up a program or switching between them. 
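Process creation and termination can be seen from a high level using Python's standard subprocess module. This is a minimal sketch (the child program is a made-up one-liner): we ask the OS to create a process, wait for it to finish, and collect its exit status.

```python
import subprocess
import sys

# Ask the OS to create a new process running a tiny Python program.
# The OS allocates memory and CPU time for it, then reclaims both on exit.
child = subprocess.Popen(
    [sys.executable, "-c", "print('hello from the child process')"],
    stdout=subprocess.PIPE,
    text=True,
)

output, _ = child.communicate()        # wait for the child to terminate
print(output.strip())                  # → hello from the child process
print("exit code:", child.returncode)  # → exit code: 0
```

Behind that one Popen call, the OS is doing all the work described above: allocating resources, scheduling the new process, and cleaning up when it terminates.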

2. Memory-

The OS also manages how RAM is used by different processes. It allocates the right amount of memory to get each job done, which helps prevent crashes and keeps things running smoothly, and it handles basics like loading a program from the hard drive into RAM.

3. Storage Management/Security-

Operating systems are responsible for managing files and directories on both internal and external storage devices. They also control who can access files by managing permissions. Beyond deciding what authorized users can reach, the OS can also ensure that unauthorized users can't use the computer in the first place, using features like firewalls and integration with third-party antivirus software.

4. Device Management-

Operating systems manage hardware devices as well as software. Drivers teach the OS about particular hardware, such as a keyboard, mouse, or printer, so the two can communicate properly.

Hierarchy-

When I think of the hierarchy of an operating system, I like to think of the translation I referred to at the start of this post. Starting with the kernel: this is our core translator. It manages resources and interacts directly with hardware. Then we have system libraries, which essentially act as a codex, providing the instructions that let applications communicate with the kernel. Next come system utilities, which perform system-level tasks such as managing files or monitoring performance; these exist alongside everything else, so it's difficult to place them precisely in the hierarchy. Last, we have the applications themselves. This is what the end user sees when turning on their monitor.

Whether it be games, browsers, video editing software, or Microsoft Office, these are what most people think of when thinking of computers. Applications make requests of the operating system through system libraries. The libraries then talk with the kernel to access what is needed to run the programs. Then, the kernel interacts with the hardware to perform what is being asked of it.

 

Processes

Process- Our text describes a process as a program in execution or a unit of work in a modern time-sharing system. Processes are the basis of all computation.

 

Processes are composed of:

·       The current activity (represented by the value of the program counter).

·       The process stack, which includes temporary data (function parameters, return addresses, local variables).

·       The data section, which contains global variables.

·       A heap, which is memory dynamically allocated during run time.

Processes exist in several states, as seen in our text and our other discussion this week:

 

·       New: A process is in the 'new' state when it's being created. The operating system is preparing the necessary resources for the process to run.

·       Ready: The process has been loaded into memory and is ready for execution. It's waiting for its turn to use the CPU.  

·       Running: The process is currently being executed by the CPU.  

·       Waiting: The process is paused, waiting for an event to occur, such as I/O completion or a resource to become available.  

·       Terminated: The process has finished execution or has been terminated by the operating system.
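The five states above form a small state machine, and only certain transitions between them are legal. Here is a toy Python sketch of those transitions; this is a simplified illustration of the model, not how any real scheduler is coded.

```python
# Allowed process state transitions in the five-state model.
TRANSITIONS = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),
}

def move(state, next_state):
    """Return the next state if the transition is legal, else raise."""
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state

# Walk one process through a typical lifetime: it runs, blocks on I/O,
# becomes ready again, runs once more, and finishes.
state = "new"
for step in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = move(state, step)
print(state)  # → terminated
```

Notice that a process can never jump straight from "waiting" to "running": it must pass through "ready" and be picked by the scheduler again.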

Process Control Block (PCB): The PCB is a data structure that stores all the information needed to manage a process. This includes:

 

·       Process ID: An identifier for the process.

·       Process state: The current state of the process.

·       Program counter: The address of the next instruction to be executed.

·       CPU registers: The values of the CPU registers.

·       Memory information: Information regarding the amount of memory used for the process.

·       I/O status: Information about the process's I/O operations.

·       Scheduling information: The process's priority and related scheduling parameters.
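A PCB is easy to picture as a plain record of those fields. Here is a toy version as a Python dataclass; the field names are illustrative (a real kernel uses a structure like Linux's task_struct, with far more detail).

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block mirroring the fields listed above."""
    pid: int                                         # process ID
    state: str = "new"                               # process state
    program_counter: int = 0                         # next instruction address
    registers: dict = field(default_factory=dict)    # saved CPU registers
    memory_limits: tuple = (0, 0)                    # memory information
    open_files: list = field(default_factory=list)   # I/O status
    priority: int = 0                                # scheduling information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # → 42 ready
```

When the OS switches between processes, it saves the running process's registers and program counter into its PCB, then restores another PCB's values; that's a context switch in a nutshell.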

 

Single-threaded vs. Multi-threaded (motivations and models)

 

Single Threaded: Processes with a single thread of execution execute instructions one at a time in order.

Multi-threaded: With a multi-threaded process, there are multiple threads running at once. They share resources such as code and data, but each has its own stack and registers.

The best analogy I've found for the difference is with laundry. When you're doing laundry, if you had to wait until clothes were dry to put another load in the washer, things would take much more time. This is what you would expect with a single-threaded process. On the other hand, multithreaded is like doing laundry the way most of us do, where as soon as there are clothes in the dryer, we go ahead and put another load in the washer.

 

However, the benefits of multithreading go beyond speed. It also allows for better allocation of resources and more responsiveness: if one thread is blocked or in a "wait" state, the others can continue doing what they need to do to keep things running. Multithreading also lets a process take advantage of multiple processors in a system, which enables parallelism and increases performance for the same reasons described in my laundry analogy.
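The laundry analogy maps directly onto code. Here is a minimal Python sketch (the 0.1-second sleep is a stand-in for a slow, blocking task like I/O): the "dryer" runs on its own thread while the main thread starts the next "wash" instead of standing idle.

```python
import threading
import time

def dry(load, done):
    time.sleep(0.1)                   # stand-in for a slow, blocking operation
    done.append(f"{load} dried")

done = []
dryer = threading.Thread(target=dry, args=("load 1", done))
dryer.start()                 # load 1 goes into the dryer...
done.append("load 2 washed")  # ...while we wash load 2 concurrently
dryer.join()                  # wait for the dryer to finish
print(done)  # → ['load 2 washed', 'load 1 dried']
```

With a single thread, the wash would have to wait the full 0.1 seconds for the dryer; with two threads, the waiting overlaps with useful work.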

 

Some models of Multi-threading include:

 

1.       One-to-one: This is where each user thread is mapped to a specific separate kernel thread. This requires more resources, although it offers better concurrency.

 

2.       Many-to-one: This is where there are many user threads that are mapped to a single kernel thread. With many-to-one, there is greater efficiency, but if one thread is blocked, the process itself blocks.

 

3.       Many-to-many: As you can guess from the other two, this is many user-level threads mapped to many kernel threads. This offers a balance between the systems.

 

The Critical Section Problem:

 

Our text describes a critical section as a "segment of code in which the process may be changing common variables, updating a table, writing a file, and so on."

 

As you can imagine, this has the potential to be problematic. It is important that no more than one process is executing in its critical section at the same time. The problem requires a protocol design that allows the processes to cooperate properly. By putting a mechanism in place that requires each process to get permission to enter its critical section, issues can be avoided.

This brings us to a software solution known as Peterson's Solution. Peterson's solution meets the three requirements to solve the critical section problem:

1.       Mutual exclusion: As mentioned before, this ensures no two processes are executing in their critical sections at the same time.

2.       Progress: If no process is executing in its critical section, then only processes that are not in their remainder sections can participate in deciding which process enters next, and this selection cannot be postponed indefinitely.

3.       Bounded waiting: A limit on the number of times other processes are allowed to enter their critical sections after a process has requested entry to its critical section and before that request is granted.

 

Note: Peterson's solution only works for two processes.
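Here is what Peterson's solution looks like as a two-thread Python sketch. This is purely illustrative: CPython's interpreter lock happens to give the sequential consistency the algorithm assumes, while on real hardware you would need memory barriers or atomic instructions. The iteration count and variable names are my own.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-waits stay short

flag = [False, False]  # flag[i] is True when process i wants to enter
turn = 0               # which process yields when both want in
counter = 0            # shared data the critical section protects

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(1000):
        flag[i] = True
        turn = other                         # entry section: defer to the other
        while flag[other] and turn == other:
            pass                             # spin until it is safe to enter
        counter += 1                         # critical section
        flag[i] = False                      # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 2000
```

Because mutual exclusion holds, every one of the 2,000 increments survives; remove the flag/turn protocol and updates could be lost to the race on counter.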

 

 

 

Memory Management:

The purpose of memory management in an operating system is to ensure the computer system works reliably and efficiently. With that in mind, memory management has several objectives. First, it aims to maximize the use of memory by allocating it to processes as efficiently as possible. It also increases reliability through process isolation, which prevents processes from interfering with each other's memory. This ensures that if something goes wrong or something malicious gets into the system, it can't reach other processes and compromise the operating system as a whole. Another way it increases efficiency is through address translation, which converts logical addresses into the physical addresses of actual memory locations. This acts as a sort of translator and lets processes run anywhere in memory without needing to know physical addresses. The last important objective I'll include here is memory protection, which ensures the user doesn't accidentally destroy the system by overwriting something important. Without this protection, managing a computer system would be a stressful nightmare.




Basic Memory Functions:

  • Sharing: Allows inter-process communication through the use of shared memory regions. This helps reduce memory usage.
  • De-Fragmentation: If you've owned a Windows computer, you've likely performed defragmentation at some point. The purpose is to consolidate the scattered fragments that are unusable on their own into larger, usable contiguous blocks.
  • Allocation/Deallocation: The process of assigning memory space either when created or unassigning when processes terminate and no longer need the memory.
  • Tracking/Managing: Keeping track of what memory is in use and what is free to allocate elsewhere.
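To make allocation, tracking, and fragmentation concrete, here is a toy first-fit allocator in Python. The free-list representation and all the sizes are invented for illustration; real allocators are far more sophisticated.

```python
# Track free blocks as (start, size) pairs and carve allocations out of the
# first block big enough ("first fit").
free_list = [(0, 100)]   # one free region: addresses 0..99

def allocate(size):
    """Return the start address of a block, or None if nothing fits."""
    for idx, (start, length) in enumerate(free_list):
        if length >= size:
            if length > size:
                free_list[idx] = (start + size, length - size)  # shrink block
            else:
                free_list.pop(idx)                              # exact fit
            return start
    return None

def free(start, size):
    """Return a block to the free list (no coalescing, to keep it short)."""
    free_list.append((start, size))

a = allocate(30)   # addresses 0..29
b = allocate(50)   # addresses 30..79
free(a, 30)        # a hole opens at the front
c = allocate(40)   # 50 bytes are free in total, but no single block fits 40
print(a, b, c)     # → 0 30 None
```

That final failed allocation is fragmentation in miniature: enough total memory is free, but it isn't contiguous, which is exactly what compaction and paging exist to address.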

Physical Address Space Vs. Virtual Address Space:

Physical: The physical address space represents the actual memory locations in RAM. It is used directly by hardware components to access memory. Physical address spaces are limited by how much RAM is currently installed.

Virtual: The virtual address space acts much like virtual machines on a cloud network do. It is a set of logical addresses that are assigned to the processes rather than the physical hardware itself. Using virtual memory allows for much more "space".

Memory Mapping: 

Memory mapping allows for the translation between virtual and physical addresses in RAM. This is done in a few different ways. The first two are paging and segmentation. With paging, the virtual address space is divided into pages, and physical memory is divided into frames. The page table maps virtual pages to their physical frames. With paging, the pages can be scattered all over physical memory without being contiguous, which helps with fragmentation. With segmentation, the virtual address space is divided into segments, with each segment being an entry in a table mapping to a physical location. This is like paging, but rather than having fixed sizes, the segments are variable in size. There is also a third option, which is to use a combination of the two. This is the best way to fit the specific needs of the system, as it allows resources to be allocated properly. The best way I can describe it would be like a game of Tetris: having many different types of blocks makes it easier to fill things in when things start to get crazy and fragmented.
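The paging translation above can be sketched in a few lines of Python. The 256-byte page size and the page-table contents are made-up values for illustration (real systems typically use 4 KiB pages and hardware-walked tables).

```python
PAGE_SIZE = 256

# Page table: virtual page number -> physical frame number.
# Note the frames are scattered; the pages don't need to be contiguous.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    page = virtual_addr // PAGE_SIZE       # which virtual page
    offset = virtual_addr % PAGE_SIZE      # position within that page
    frame = page_table[page]               # look up the physical frame
    return frame * PAGE_SIZE + offset      # rebuild the physical address

print(translate(0))    # page 0, offset 0  → frame 5 → 1280
print(translate(300))  # page 1, offset 44 → frame 2 → 556
```

The offset survives translation unchanged; only the page-to-frame part of the address is remapped, which is what lets the OS place pages wherever frames happen to be free.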

Without the use of virtual space, using a computer's memory would be miserable. The virtual address space is what gives computers the feel of a seamless transition when it comes to memory. Without it, it would feel like every time you needed to run a process, you had to find the specific part of your memory that would fit in and ensure it would stay within those confines. 

 


File Systems Management

File systems management is meant to act as an overseer of stored data. It performs the tasks of creating, deleting, copying, and moving different files and directories. The purpose of file system management is to organize files efficiently across the system for easy access, as well as to ensure that both used and free space are managed properly. Ensuring data is properly allocated can provide stability and ensure that resources are available for the user whenever needed. For reliability, file systems can use techniques like redundancy, checksums, and journaling to allow the system to recover from failures and ensure data integrity.

Different Directory Structures:

  • Single-Level: This is the basic structure in which all files are in one directory. Useful for small amounts of files, but not ideal for larger amounts of files.
    (Diagram: a single-level directory.)
  • Two-Level: Similar to single-level, but creates a separate directory for each user. Used in the same way as single-level but for multiple users on one system.
    (Diagram: a two-level directory.)
  • Tree-Structured: Creates a hierarchy of directories and files to allow for grouping of files for better organization. This is what most are generally used to with systems like Windows. For an example, look no further than your file explorer.

    (Diagram: a tree-structured directory.)

  • Acyclic/General Graph: Uses the same structure as tree-structured directories but also allows links between directories. This is good for project management and sharing information between users.
    (Diagram: an acyclic-graph directory.)
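A tree-structured directory is easy to model as nested dictionaries, with path lookup walking the tree one level at a time. The directory names and file contents here are invented for illustration.

```python
# Directories map names to either subdirectories (dicts) or file contents (strings).
root = {
    "home": {
        "alice": {"notes.txt": "remember the milk"},
        "bob": {},
    },
    "etc": {"hosts": "127.0.0.1 localhost"},
}

def lookup(tree, path):
    """Resolve a /-separated path by walking the tree one level at a time."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]   # descend into the next directory (or file)
    return node

contents = lookup(root, "/home/alice/notes.txt")
print(contents)  # → remember the milk
```

An acyclic-graph directory would simply allow two different paths to reach the same node, which is essentially what links do.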

Input/Output Devices

Input and output devices are as their names imply. Input devices are what we can use to input information into the system, such as your keyboard, mouse, or even webcams and microphones. On the other hand, output devices are what we can use to get information from the system, such as monitors, speakers, printers, projectors, etc. There are also storage devices, such as hard drives or flash drives, that can be used for input/output. The idea is that you can get information from or send information to these devices. The last type of I/O device would be those used for communication, like modems, gateways, or network cards.

The hardware layer refers to the actual physical device, the electrical signals it sends, and any mechanical movements. The software layer refers to things like the operating system and device drivers. Each of these acts as a translator to ensure the user has the smoothest experience possible, shielding the user from the complicated inner workings. I/O integration consists of the CPU telling a device what to do via registers or memory-mapped I/O. The device controller handles the actual input and output. Data moves between the devices and memory, sometimes using direct memory access (DMA) and bypassing the CPU altogether. Once this process completes, the CPU is alerted via an interrupt, and the operating system makes what is needed ready for the program to work.
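The hardware/software split can be sketched as two small classes: a simulated "device" that accumulates raw events, and a "driver" that translates them into something clean for the OS. Everything here (class names, methods, behavior) is invented purely to illustrate the layering.

```python
class KeyboardDevice:
    """Hardware layer (simulated): raw signals land in a controller buffer."""
    def __init__(self):
        self.buffer = []

    def key_press(self, ch):
        self.buffer.append(ch)   # an "electrical signal" arriving

class KeyboardDriver:
    """Software layer: hides the device's details behind a simple call."""
    def __init__(self, device):
        self.device = device

    def read_line(self):
        # Drain the device buffer and hand the OS a clean string.
        chars, self.device.buffer = self.device.buffer, []
        return "".join(chars)

dev = KeyboardDevice()
drv = KeyboardDriver(dev)
for ch in "hi":
    dev.key_press(ch)        # hardware events accumulate in the controller
line = drv.read_line()       # the driver translates them for the OS
print(line)  # → hi
```

The program calling read_line never sees buffers or signals, which is exactly the "translator" role the software layer plays.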

 




 

Security

In today's insane computing landscape, protection mechanisms are incredibly important. We strive for isolation, least privilege, controlled access, and fault containment, primarily through domain and language-based protection. Domains act as security contexts that define what resources a process is able to access and how it can access them. Language-based protection leverages type and memory safety, extending security down to the code level.

Something paramount to this protection is the access matrix. The access matrix uses domains as rows and objects as columns to define access rights. For instance, a cell might indicate that a domain has "read" access to object "A" and "write" access to object "B". When a process tries to access a particular resource, the system checks the matrix and ensures that only authorized actions can occur.

(Diagram: an access matrix.)
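An access matrix check is only a few lines of code. Here it is as nested Python dictionaries, with rows as domains and columns as objects; the domain and object names are made up for the example.

```python
# Rows are domains, columns are objects, cells are sets of rights.
access = {
    "domain1": {"fileA": {"read"}, "fileB": {"read", "write"}},
    "domain2": {"fileA": {"read", "write"}},
}

def allowed(domain, obj, right):
    """Check the matrix before permitting an operation."""
    return right in access.get(domain, {}).get(obj, set())

print(allowed("domain1", "fileB", "write"))  # → True
print(allowed("domain1", "fileA", "write"))  # → False
```

An unknown domain or object simply falls through to an empty set, so anything not explicitly granted is denied, which matches the least-privilege goal above.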

Protecting programs, systems, and networks requires a multi-faceted approach. Programs benefit from input validation, code review, and memory protections such as ASLR and DEP, while regular software updates patch known vulnerabilities before they can be exploited. At the system level, we have access control, authentication, firewalls, and encryption, as well as intrusion detection through antivirus programs that monitor for malicious activity. Networks can be secured through firewalls, VPNs, or secure protocols such as HTTPS and SSH, and network segmentation limits the damage if there is a breach. All of this, combined with regular security audits and penetration testing, helps ensure the security of a modern computer system. Constant vigilance is required, however, and even a single missed update can be a problem.

Finally, we have security at the user level. User-level security covers the measures the user actively participates in: creating strong passwords, using multi-factor authentication, keeping software updated, and ensuring permissions are accurate and necessary for whichever user is entering the system. You wouldn't want a front-line employee to have administrator permissions. Another large factor in security is education. Proper education can ensure that a phishing email or scam call remains a simple inconvenience rather than a system-wrecking nightmare. Proper training of those using the system is crucial to the security of all users and resources.



