
Memory Management in Operating Systems – Simple Explanation

Memory management is an essential function of the operating system; it falls under resource management, one of the two main responsibilities of an OS. Main memory (RAM) is where applications run, and it is one of the first specifications people check when buying a new phone or laptop.

Memory Hierarchy in Computers

What users want is a private, infinitely large, infinitely fast memory that is also nonvolatile. Nonvolatile means that it still holds its data if the computer is accidentally switched off. But such a memory would be far too expensive, and current technology cannot build it anyway.

Engineers have come up with three different variants of memory to be used inside a computer.

  1. Cache Memory: A small amount of memory that is incredibly fast, expensive, and volatile.
  2. Random Access Memory (RAM): A medium-sized memory of a few gigabytes that is medium-priced, reasonably fast, and volatile.
  3. HDD or SSD: A large memory offering up to terabytes of storage that is cheap, slow, and nonvolatile.

The operating system has a component called the memory manager. It’s the job of the memory manager to manage memory efficiently, which includes:

  • Keeping track of which parts of memory are in use.
  • Allocating memory to processes when they need it.
  • Freeing up memory once processes are done with it.

It is the job of the OS and its memory manager to share this memory among applications and keep the computer from going into a deadlock. On the off chance that it does go into a deadlock, the OS knows how to deal with deadlocks too.

What are memory management requirements?

Let’s face it: processes require memory to run. These days you can find machines with 32 GB of RAM or more, yet applications have grown just as quickly, and they need carefully allocated memory to run well. The core requirement of memory management is to always keep enough memory available for the currently running processes.

Why do we need memory management in OS?

Now that we know what memory management is about, let’s see why we need it. The following are the main reasons.

Relocation

When we work on a multiprogramming system, several processes are running at the same time. It isn’t possible to know in advance which other programs will be resident in main memory, or where our process will be placed, when it executes.

To handle this, the memory manager relocates processes: it keeps track of where each process actually sits in memory and translates the addresses the program uses accordingly, allocating and freeing memory as processes come and go, so that execution stays smooth and memory is used efficiently.

Protection

With multiple processes executing, one process could write into the address space of another. Every process must therefore be protected against unwanted interference from other processes. The memory manager protects the address space of each process while keeping the relocation mechanism in mind; the protection and relocation aspects of the memory manager work hand in hand.

Sharing

When multiple processes run in main memory, the protection mechanism must still allow several processes to access the same portion of memory. Letting processes share one copy of a program or of some data, rather than giving each process its own copy, uses memory far more efficiently. Memory management therefore allows controlled access to shared memory without compromising protection.

Logical Organization

Physically, memory is a linear array of storage, some of which can be modified and some of which cannot, while programs are organized as modules. Memory management lets user programs allocate, use, and access memory without causing chaos, such as a program modifying data it was never supposed to touch. Because modules are written and compiled independently, references between them must be resolved by the system at run time. Memory management can give different modules different degrees of protection and can share modules according to the user’s specification.

Physical Organization

Computer memory consists of volatile main memory and nonvolatile secondary memory. Applications are stored in secondary memory, the hard drive (or SSD) of your computer, but when you run an application it is brought into main memory, the system’s RAM. Keeping this flow of transfers between main and secondary memory smooth requires proper memory management.

What is Memory Partitioning in OS?

Let’s move on to memory partitioning. For better utilization of memory and flow of execution, we divide the memory into different sections to be used by the resident programs. The process of dividing the memory into sections is called memory partitioning. There are different ways in which memory can be partitioned:

Fixed Partitioning/Static Partitioning

In fixed partitioning, the number of non-overlapping partitions in RAM is fixed, although the partitions need not all be the same size. Because memory is allocated contiguously, a process cannot span more than one partition. The partitions are made either before execution begins or at system configuration time.

Dynamic Partitioning

In dynamic partitioning, partitions are created at run time according to the needs of the incoming processes: each partition is made exactly as large as the process it holds. The number of partitions is not fixed; it depends on how many processes arrive and on the size of main memory.
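
As a rough illustration, here is a small Python sketch of dynamic partitioning. The first-fit placement policy and the hole sizes are assumptions made for the example; the article does not prescribe a particular placement algorithm.

```python
# A rough sketch of dynamic partitioning with first-fit placement.
# "holes" is a list of (start, size) pairs describing free memory.

holes = [(0, 1000)]          # initially the whole memory is one free hole

def allocate(size):
    """Carve out a partition exactly the size of the incoming process."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            remaining = hole_size - size
            if remaining:
                holes[i] = (start + size, remaining)   # shrink the hole
            else:
                holes.pop(i)                           # hole used up entirely
            return start                               # base address of the partition
    return None                                        # no hole large enough

p1 = allocate(300)   # partition [0, 300)
p2 = allocate(200)   # partition [300, 500)
print(holes)         # [(500, 500)] -- one free hole of 500 units remains
```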

Buddy System

In static partitioning, we suffer from the limitation of a fixed number of active processes, which at times leads to inefficient use of space. The buddy system is a memory allocation and management algorithm that manages memory in power-of-two increments. For instance, if the whole memory has size 2^U and a request of size S arrives, then:

  • if 2^(U-1) < S <= 2^U, the whole block is allocated;
  • otherwise, the block is divided into two equal halves (buddies), and the test is applied recursively to one half until a block that just fits is found.

The buddy system also keeps a record of all the unallocated blocks and can merge adjacent free buddies back into larger blocks. It is easy to implement and allocates a block of the right size, but it requires all allocation units to be powers of two.
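
Here is a minimal Python sketch of this splitting-and-merging logic, assuming a memory of 2^max_order units tracked through per-size free lists. The class name and sizes are purely illustrative.

```python
# A minimal sketch of buddy-system allocation over 2**max_order memory units.
class BuddyAllocator:
    def __init__(self, max_order):
        self.max_order = max_order
        # free_lists[k] holds the start addresses of free blocks of size 2**k
        self.free_lists = {k: [] for k in range(max_order + 1)}
        self.free_lists[max_order].append(0)   # one big free block at address 0

    def allocate(self, size):
        order = 0
        while (1 << order) < size:             # round the request up to 2**order
            order += 1
        k = order
        while k <= self.max_order and not self.free_lists[k]:
            k += 1                             # find the smallest free block that fits
        if k > self.max_order:
            return None                        # out of memory
        addr = self.free_lists[k].pop()
        while k > order:                       # split until the block just fits
            k -= 1
            self.free_lists[k].append(addr + (1 << k))   # second half becomes a free buddy
        return addr

    def free(self, addr, size):
        order = 0
        while (1 << order) < size:
            order += 1
        while order < self.max_order:
            buddy = addr ^ (1 << order)        # buddy address differs in exactly one bit
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)   # merge with the free buddy
            addr = min(addr, buddy)
            order += 1
        self.free_lists[order].append(addr)

alloc = BuddyAllocator(max_order=4)   # 16 units of memory in total
a = alloc.allocate(3)                 # rounded up to a block of 4 units
alloc.free(a, 3)                      # freed block merges back into one 16-unit block
```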

Fragmentation in Operating Systems 

After the partitioning of memory comes fragmentation. As processes are loaded into and removed from memory, the freed memory space breaks into little pieces. Eventually these pieces become too small to hold any new process, so they remain unused. This problem is known as fragmentation: the wasted memory that arises when processes are repeatedly loaded and removed, breaking free memory into small pieces. Fragmentation is mainly of two types:

  1. External fragmentation – The total free memory is enough to satisfy a request or hold a process, but it is not contiguous, so it cannot be used.
  2. Internal fragmentation – The memory block assigned to a process is bigger than what the process asked for, and the leftover portion inside the block goes unused because no other process can be placed in it. For example, with 4 KB allocation blocks, a process that needs 7 KB is given 8 KB and wastes 1 KB internally.

Swapping in OS 

Swapping is a technique for making more main memory available. A process is temporarily swapped out of main memory to secondary memory, freeing space for other processes; at some later time, the system can swap the process back from secondary memory into main memory.

Swapping does affect system performance, but it lets multiple processes run in parallel. The total time taken to swap a process includes the time needed to move the entire process out to secondary memory and later back into main memory.

Different types of Memory Management Techniques

The operating system has to manage free memory in addition to carrying out its other operations. The main memory management techniques are:

  • Managing free memory using a linked list
  • Managing free memory using bitmap
  • Memory management using paging
  • Memory management using segmentation
  • Memory management using virtual memory

When memory is allocated and de-allocated dynamically, the operating system must be able to keep track of it. To record memory usage, the operating system generally uses one of two approaches:

Memory management using bitmap

With a bitmap, memory is first divided into allocation units, and each allocation unit gets one corresponding bit in the bitmap: the bit is 0 if the unit is free and 1 if it is in use. The size of the allocation unit is a design issue, since the smaller the unit, the more units there are and the larger the bitmap becomes.
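
As an illustration, here is a small Python sketch of bitmap-based tracking; the number of units and the helper names are made up for the example.

```python
# Free-memory tracking with a bitmap: bit i describes allocation unit i
# (0 = free, 1 = in use).

UNITS = 16
bitmap = [0] * UNITS          # all units start out free

def allocate(n_units):
    """Find n_units consecutive free units, mark them used, return the first index."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n_units:
                for j in range(run_start, run_start + n_units):
                    bitmap[j] = 1
                return run_start
        else:
            run_len = 0
    return None               # no run of free units is long enough

def free(start, n_units):
    for j in range(start, start + n_units):
        bitmap[j] = 0

base = allocate(4)            # marks bits 0..3 as used
```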

Memory management using a linked list

Another way to manage memory is to keep a linked list of all the allocated and free memory segments. The segment list is kept sorted by address, which makes it easy to update when processes are swapped in and out. Each list entry specifies whether the segment is a hole or a process, the address at which it starts, its length, and a pointer to the next entry.
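
A tiny Python sketch of such a segment list might look like this; the node fields follow the description above, and the memory layout values are invented for illustration.

```python
# A segment list: each node records whether it is a process ('P') or a hole ('H'),
# its starting address, its length, and a link to the next entry.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    kind: str                  # 'P' for process, 'H' for hole
    start: int
    length: int
    next: Optional["Segment"] = None

# Layout: a 100-unit process, a 50-unit hole, then a 200-unit process.
head = Segment('P', 0, 100,
       Segment('H', 100, 50,
       Segment('P', 150, 200)))

def free_segment(node):
    """Turn a process entry into a hole and merge it with a following hole."""
    node.kind = 'H'
    if node.next and node.next.kind == 'H':
        node.length += node.next.length
        node.next = node.next.next
```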

Memory Paging

With dynamic memory allocation, the memory a process occupies may end up non-contiguous. To manage memory efficiently in that case, we use a technique called paging. Paging is a memory management technique that allows a process's memory allocation to be non-contiguous; in other words, it maps virtual (logical) addresses onto physical addresses.

The mapping is stored in a page table, a data structure used by the virtual memory system to record which physical frame holds each logical page. The CPU generates logical addresses, which are the addresses processes work with; physical addresses are the actual frame addresses in memory.

So, for instance, if

Physical Address Space = M words

Logical Address Space = L words

Page Size = P words

then

Physical address = log₂ M = m bits

Logical address = log₂ L = l bits

Page offset = log₂ P = p bits

and the page table maps the (l - p) page-number bits of a logical address to the (m - p) frame-number bits of the corresponding physical address.
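
For example, here is a short Python sketch of that translation, assuming a 4096-word page size (so p = 12) and a page table whose contents are made up purely for illustration.

```python
# Splitting a logical address into page number and offset, then mapping
# the page to its frame via the page table.

PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 1}       # page number -> frame number

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page_number]    # look up the frame holding this page
    return frame * PAGE_SIZE + offset  # physical address

print(translate(4100))                 # page 1, offset 4 -> frame 9 -> 36868
```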

Memory Segmentation

In an operating system, segmentation is a memory management technique in which memory is divided into variable-sized segments that are allocated to a process. The details of each segment are stored in a segment table, which may itself be stored in one of the segments. Each segment table entry contains two pieces of information: the base, which gives the starting address of the segment, and the limit, which gives the length of the segment.

A CPU-generated logical address has two parts: a segment number and an offset. The segment number is used as an index into the segment table, and the offset is compared with that segment's limit. The address is valid only if the offset is less than the limit; if it is, the segment's base address is added to the offset to obtain the physical address.
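
A short Python sketch of this check-and-add translation, with an invented segment table, might look like this.

```python
# Segment-table translation: each entry holds a (base, limit) pair.

segment_table = {
    0: (1400, 1000),   # segment 0 starts at address 1400, length 1000
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(segment_number, offset):
    base, limit = segment_table[segment_number]
    if offset >= limit:                # valid only if offset < limit
        raise MemoryError("invalid address: offset is outside the segment")
    return base + offset               # physical address

print(translate(2, 53))                # 4300 + 53 = 4353
```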


Virtual Memory

Virtual memory can be defined as a storage allocation scheme in which secondary storage is accessed as though it were part of the main memory.

It is implemented using both hardware and software. The size of virtual storage is limited by the computer's addressing scheme and by the amount of secondary memory available. Virtual memory maps the memory addresses used by a program onto physical addresses, and it is realized through demand paging or demand segmentation.
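
As a toy illustration of demand paging, the sketch below keeps only the pages a program has touched in its page table and "loads" a page from backing store on first access. The page size, frame-assignment policy, and backing-store contents are assumptions for the example; a real OS would also evict pages when frames run out.

```python
# A toy model of demand paging: touching a page that is not resident
# triggers a "page fault" that brings it in and records the mapping.

PAGE_SIZE = 4096
page_table = {}                        # page number -> frame number (resident pages only)
backing_store = {0: b"...", 1: b"...", 2: b"..."}   # pages kept on disk
next_free_frame = 0

def access(logical_address):
    global next_free_frame
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page not in page_table:         # page fault: page is only on disk
        frame = next_free_frame        # (a real OS may first evict another page)
        next_free_frame += 1
        _data = backing_store[page]    # "read" the page in from secondary memory
        page_table[page] = frame       # record the new mapping
    return page_table[page] * PAGE_SIZE + offset

print(access(8200))                    # first touch of page 2 faults, then maps to frame 0
```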

