Windows 95/98


Processes and Threads

Windows 95 is multithreaded, meaning that more than one thread of execution can run in a single task or application at once.

What Is a Process?

A process is essentially a program. Each process has memory and access to system resources. A process comprises:

  1. A runnable program that defines initial code and data.
  2. A memory address space where the process's code and data are stored.
  3. System resources such as files and windows.
  4. At least one thread to run the code. Microsoft MS-DOS® and 16-bit Windows-based applications have only one thread per process. 32-bit Windows-based applications may have many threads per process.
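
The four parts above can be sketched as plain data. This is a conceptual model only; the class and field names are illustrative, not real Windows structures:

```python
from dataclasses import dataclass, field

# Conceptual sketch of a process: a runnable program, a private
# address space, owned resources, and at least one thread.
@dataclass
class Process:
    program: str                                        # runnable code and data
    address_space: dict = field(default_factory=dict)   # process-private memory
    resources: list = field(default_factory=list)       # files, windows, ports
    threads: list = field(default_factory=list)         # threads that run the code

    def __post_init__(self):
        if not self.threads:
            self.threads.append("main")   # every process needs at least one thread

p = Process(program="notepad.exe")
print(p.threads)   # ['main']; a 32-bit application could append more
```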

What Is a Thread?

A thread is a unit of execution: the part of a process that is actually running at any given moment. It runs in the address space of the process, using resources allocated to the process.

Note: Resources are owned by the process, not by its threads. Threads use the resources, but the process retains ownership. For example, if an application requests use of a port, that port is controlled by the process. Any of that process's threads may use the port, but an individual thread cannot claim the port for itself.

A thread comprises:

  1. A processor state including the current instruction pointer (register).
  2. A stack for use when running in user mode.
  3. A stack for use when running in kernel mode.
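
The three parts of a thread can be sketched the same way; again, the names are illustrative, not actual Windows structures:

```python
from dataclasses import dataclass, field

# Conceptual sketch of a thread: processor state plus two stacks.
@dataclass
class Thread:
    instruction_pointer: int = 0                        # processor state
    user_stack: list = field(default_factory=list)      # used in user mode
    kernel_stack: list = field(default_factory=list)    # used in kernel mode

t = Thread()
t.user_stack.append("local variables")   # user-mode calls grow the user stack
t.instruction_pointer += 1               # execution advances the processor state
```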

In a multithreaded program, the programmer is responsible for making sure that the different threads do not interfere with each other. This can be accomplished by using these shared resources in a way that does not conflict with another thread's use of the same resource.
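
A minimal sketch of this coordination, using Python's threading module as a stand-in; the shared resource here is just a list, and the lock ensures each thread's update completes without interference:

```python
import threading

# Three threads append to one shared list; the Lock makes each
# append happen without interference from the other threads.
ledger = []
ledger_lock = threading.Lock()

def record(name, count):
    for i in range(count):
        with ledger_lock:              # only one thread inside at a time
            ledger.append((name, i))

workers = [threading.Thread(target=record, args=(f"t{k}", 100)) for k in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(ledger))   # 300: no updates lost to a conflict
```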

Multitasking

Preemptive

In a preemptive multitasking system, each thread runs for a preset amount of time, or until another thread with a higher priority is ready to run. Because scheduling is handled by the operating system without the cooperation of the application, it is more difficult for a program or thread to monopolize the processor. To prevent threads in different processes from accessing a resource that cannot be shared (such as a serial port), a program can set a semaphore (a special flag) that locks the resource until the thread is finished using it.

In Windows 95, MS-DOS and 32-bit Windows-based programs are preemptively multitasked.
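
The semaphore idea can be sketched as follows. The "serial port" is simulated, and the log exists only to show that port sessions never interleave:

```python
import threading

# A semaphore with a count of one acts as the flag that locks the
# (simulated) serial port until the holder is finished with it.
port_free = threading.Semaphore(1)
log = []

def use_port(app):
    port_free.acquire()            # wait until the port is unlocked
    try:
        log.append(f"{app} opened port")
        log.append(f"{app} closed port")
    finally:
        port_free.release()        # unlock the port for the next thread

apps = [threading.Thread(target=use_port, args=(f"app{i}",)) for i in range(3)]
for a in apps:
    a.start()
for a in apps:
    a.join()
# Every open is immediately followed by the matching close:
assert all(log[i].split()[0] == log[i + 1].split()[0] for i in range(0, 6, 2))
```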

Cooperative

In cooperative multitasking (also known as non-preemptive multitasking), a thread runs until it voluntarily relinquishes the processor. The program determines when the thread stops running.

In Windows 95, 16-bit Windows-based programs are cooperatively multitasked with regard to each other. All the 16-bit Windows-based programs together are treated as a single task to the preemptive multitasker. This hybrid multitasking is necessary to maintain compatibility with 16-bit Windows-based programs that expect to control their own execution.
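
Cooperative scheduling can be sketched with generators, which give up control only when they explicitly yield:

```python
# Cooperative multitasking in miniature: each "program" below is a
# generator that runs until it voluntarily yields the processor back
# to the scheduler. Nothing can preempt it; a task that never yields
# would starve every other task.
def program(name, steps, trace):
    for i in range(steps):
        trace.append(f"{name}:{i}")
        yield                        # voluntarily relinquish the processor

trace = []
tasks = [program("calc", 2, trace), program("write", 2, trace)]
while tasks:
    task = tasks.pop(0)              # simple scheduler: first task in line
    try:
        next(task)                   # run until the task yields
        tasks.append(task)           # it gave up the CPU; put it back in line
    except StopIteration:
        pass                         # the task finished

print(trace)   # ['calc:0', 'write:0', 'calc:1', 'write:1']
```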

Thread Priorities

Each thread has a base priority. The priority determines when a thread is run in relation to other threads in the system. The thread with the highest priority gets use of the processor. Threads in a process may have their base priority altered by as much as two levels up or down.

There are 32 priority levels. The scheduler can change the priority of application threads.
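
Those two limits can be sketched together; set_thread_priority is a hypothetical helper, not a real API, and the clamping mirrors the rules above:

```python
# Hypothetical helper: adjust a base priority by at most two levels,
# staying within the 32 priority levels.
def set_thread_priority(base, delta):
    delta = max(-2, min(2, delta))          # at most two levels up or down
    return max(1, min(32, base + delta))    # clamp to the 32 levels

print(set_thread_priority(10, +5))   # 12: the +5 request is limited to +2
print(set_thread_priority(31, +2))   # 32: never above the highest level
```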

Scheduling

Scheduling is the process of determining which thread has use of the processor. This process is based on a predetermined unit of time called a time slice. The actual length of a time slice depends on the configuration of the system.

There are two scheduler functions, the Primary and the Secondary scheduler. The Primary scheduler examines all runnable threads and compares their priorities; the thread with the highest priority runs. If two or more threads have the same priority, they are placed in a queue: the thread at the front runs for one time slice, then drops to the back, the next thread at the front runs for its time slice, drops to the back, and so on. This rotation continues as long as threads share that priority.
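
The equal-priority rotation can be sketched with a simple queue:

```python
from collections import deque

# Round-robin among threads at the same priority: the front of the
# queue runs for one time slice, then moves to the back.
ready = deque(["A", "B", "C"])       # three threads at the same priority
slices = []

for _ in range(6):                   # six time slices
    thread = ready.popleft()         # the thread at the front runs
    slices.append(thread)
    ready.append(thread)             # it drops to the back after its slice

print(slices)   # ['A', 'B', 'C', 'A', 'B', 'C']
```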

After this execution cycle, the Secondary scheduler boosts the priorities of non-executing threads. This allows threads with a lower starting priority to eventually reach a higher priority and run, preventing a low-priority thread from being starved of processor time entirely.
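
A sketch of that boost, with illustrative priority numbers: each pass, the scheduler runs the highest-priority thread and raises every thread that did not run:

```python
# Two threads with illustrative base priorities; the low one would
# never run without the Secondary scheduler's boost.
base = {"high": 10, "low": 2}
priority = dict(base)
ran = []

for _ in range(12):                      # twelve scheduling passes
    runner = max(priority, key=priority.get)
    ran.append(runner)
    priority[runner] = base[runner]      # the thread that ran resets to base
    for name in priority:
        if name != runner:
            priority[name] += 1          # boost non-executing threads

print("low" in ran)   # True: the low-priority thread eventually ran
```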

Dynamic Priority Adjustment

The Secondary scheduler also changes the priorities of threads over time to smooth the overall operation of programs. Depending on the type of work the thread is doing, Windows 95 may adjust the thread's priority upward or downward from its base priority. For instance:

  1. Threads that are waiting for user input (threads in the foreground process) get a priority increase. This makes the system more responsive to the user.
  2. Threads get a priority increase after completing a voluntary wait.
  3. All threads periodically get a priority increase to prevent lower-priority threads from holding locks on shared resources that are needed by higher-priority threads.
  4. Compute-bound threads get their priorities lowered. This prevents I/O operations from being blocked.
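
The adjustments above can be sketched as a single function; the function name and the numeric deltas are illustrative, since the exact values are not documented here:

```python
# Illustrative sketch of dynamic priority adjustment relative to a
# thread's base priority.
def adjusted_priority(base, *, foreground=False, finished_wait=False,
                      compute_bound=False):
    p = base
    if foreground:
        p += 1        # waiting for user input: keep the system responsive
    if finished_wait:
        p += 1        # just completed a voluntary wait
    if compute_bound:
        p -= 1        # keep CPU-bound work from blocking I/O
    return max(p, 1)  # never drop below the lowest level

print(adjusted_priority(8, foreground=True))     # 9
print(adjusted_priority(8, compute_bound=True))  # 7
```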

Priority Inheritance Boosting

What happens if a low-priority thread is using a resource needed by a high-priority thread?

In this case, Windows 95 uses a method called priority inheritance boosting.

Consider three threads, A, B, and C, running sequentially when Thread C accesses a resource.

During the next evaluation, Thread A requires the same resource that Thread C has accessed. However, because Thread C is using that resource, Thread A, even though it has a higher priority, is blocked from continuing to run. Threads B and C continue to run normally during that time slice.

During the next evaluation, the Secondary scheduler boosts the priority of Thread C to the same priority as Thread A, allowing Thread C to be scheduled ahead of other work so it can finish with the resource sooner.

Thread C continues to run until it releases the resource needed by Thread A.

Once the resource is released, Thread C's priority is lowered and Thread A starts running.
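
The whole walkthrough can be sketched with illustrative priority numbers:

```python
# Priority inheritance boosting, with the three threads from the
# walkthrough: C holds a resource A needs, so C temporarily inherits
# A's higher priority until it releases the resource.
base = {"A": 9, "B": 5, "C": 2}      # illustrative base priorities
priority = dict(base)
holder = "C"                         # C is using the resource
blocked = "A"                        # A needs it but must wait

# Boost: the holder inherits the blocked thread's higher priority.
if priority[holder] < priority[blocked]:
    priority[holder] = priority[blocked]

assert priority["C"] == 9            # C now runs ahead of B, finishing sooner

# C releases the resource: its priority returns to base and A runs.
holder = None
priority["C"] = base["C"]
runnable = max(priority, key=priority.get)
print(runnable)   # 'A'
```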