7. What are threads? What is the difference between preemptive and cooperative multithreading?
In preemptive multithreading, the scheduler can interrupt (preempt) a running thread at any time, for example when a higher-priority thread becomes runnable, before it has finished its work. In cooperative multithreading, a thread runs until it voluntarily yields control; it cannot be preempted.
- A thread is a lightweight process
- A thread is contained inside a process.
- Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
- The threads of a process share the process's instructions (its code) and its context (the values that its variables reference at any given moment).
- On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time.
- On a multiprocessor (including multi-core system), the threads or tasks will actually run at the same time, with each processor or core running a particular thread or task.
- Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler.
- The kernel of an operating system allows programmers to manipulate threads via the system call interface.
- Threads implemented and scheduled by the kernel are called kernel threads; a lightweight process (LWP) is a specific type of kernel thread that shares state and resources with other LWPs in the same process.
- Threads differ from traditional multitasking operating system processes in that:
- processes are typically independent, while threads exist as subsets of a process
- processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources
- processes have separate address spaces, whereas threads share their address space
- processes interact only through system-provided inter-process communication mechanisms
- context switching between threads in the same process is typically faster than context switching between processes.
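The sharing described above can be seen directly in a short sketch. The snippet below (names such as `worker` and `shared` are just for this example) shows several threads appending to one list that lives in the process's shared address space; a child process, by contrast, would operate on its own copy.

```python
import threading

# Threads share the process's memory: a module-level object updated by
# one thread is immediately visible to all the others.
shared = []

def worker(n):
    shared.append(n)  # writes into the single shared address space

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # all four appends landed in the same list
```

Had each `worker` run in a separate process instead, every child would have appended to its own private copy of `shared`, and the parent's list would have stayed empty.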
- Because threads share memory, the programmer must be careful to avoid race conditions and other non-intuitive behaviors. For data to be manipulated correctly, threads often need to synchronize so that they process it in the right order. Threads may also require mutually exclusive operations (often implemented using mutexes or semaphores) to prevent shared data from being modified simultaneously, or read while in the middle of being modified. Careless use of such primitives can lead to deadlocks.
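As a minimal illustration of mutual exclusion, the sketch below guards a shared counter with a `threading.Lock` (the function name `increment` and the counts are just for this example):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # `counter += 1` is a read-modify-write sequence; without the
        # lock, two threads could read the same old value and one
        # update would be lost (a race condition).
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # deterministically 200000 with the lock held
```

Removing the `with lock:` line turns this into the classic lost-update race: the final count may come up short, and by how much varies from run to run.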
- In a single-threaded program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a worker thread that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background.
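A sketch of that pattern, with the slow work stubbed out by a `time.sleep` (the names `long_task`, `results`, and `ticks` are just for this example):

```python
import threading
import time

def long_task(results):
    time.sleep(0.2)          # stands in for slow I/O or computation
    results.append("done")

results = []
worker = threading.Thread(target=long_task, args=(results,))
worker.start()

# The main thread stays responsive while the worker runs: here it
# simulates servicing user input by counting polling iterations.
ticks = 0
while worker.is_alive():
    ticks += 1               # e.g. redraw the UI, poll the event queue
    time.sleep(0.01)

worker.join()
print(results, ticks)
```

Had `long_task` run on the main thread instead, the polling loop (the stand-in for the UI) could not have executed until the task finished, which is exactly the perceived freeze described above.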
- Operating systems schedule threads in one of two ways:
- Preemptive multithreading is generally considered the superior approach, as it allows the operating system to determine when a context switch should occur. Its disadvantage is that the system may make a context switch at an inappropriate time, causing lock convoy, priority inversion, or other negative effects that cooperative multithreading can avoid.
- Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they are at a stopping point. This can create problems if a thread is waiting for a resource to become available.
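The cooperative model can be sketched with a toy round-robin scheduler built on generators (everything here, including `task` and `run`, is a simplified illustration, not how a real OS scheduler works): each "thread" runs until it voluntarily hits a `yield`, and the scheduler never interrupts it.

```python
from collections import deque

def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # the voluntary stopping point; a task that never
               # yields would starve every other task

def run(tasks):
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)          # resume the task until its next yield
            ready.append(t)  # re-queue it: simple round-robin
        except StopIteration:
            pass             # the task ran to completion

log = []
run([task("A", 2, log), task("B", 2, log)])
print(log)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

The interleaving exists only because each task yields; replace the `yield` with a blocking wait for a resource and the whole system stalls, which is precisely the problem noted above.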
- Need to add more information on lock convoy, priority inversion, deadlocks, race conditions, semaphores, and mutexes.