From the course: IoT Foundations: Operating Systems Fundamentals

Task execution parallelism

- [Narrator] What is a task? A task may have different meanings in different contexts. A task generally refers to a program, a unit of a program, a process, or a thread, which can be considered a lightweight process. Some RTOSes use the terms task and thread interchangeably. Programmatically, the tasks we refer to in this course take the form of a sequential function with an infinite loop that performs a specific job. How a task is implemented, however, is application specific; it should be implemented to achieve the best system performance. For example, generally speaking, you may have multiple tasks in your program: one task collects temperature data and stores it in memory, and another task transfers the data to the cloud endpoint. The duty of maintaining and controlling tasks is normally handled by an OS kernel. In this case, there are kernel tasks and user tasks. Kernel tasks are the routine tasks required by the kernel, and user tasks are defined by users, such as the data collection and data transfer tasks we've just mentioned. Every task consumes hardware processing resources in terms of CPU or processor time and memory. Let's see a simple example of multitask execution where four tasks, numbered one to four, execute sequentially without competing for CPU time. This is an ideal situation where, in any time unit, the CPU only needs to execute one task. There seems to be no issue with using the CPU resource in this example. Executing one task at a time is simple, but in today's software we usually need multiple tasks to execute concurrently. If we illustrate such task execution in an example, the four tasks may appear to have to be executed at the same time. How an OS deals with this concurrency scenario is the matter of task execution parallelism. Realistically, tasks don't all take the same amount of time. However, if we slice the CPU time into equal time slots, we can see that, during some time periods, two or more tasks still compete for CPU time. This can be considered an extended case of the previous one. Let's look at how the tasks get executed on CPUs. If we have only a single-core CPU, the CPU cannot physically process multiple tasks at the same time. The way to do this is to let each task take a share of the CPU time, so the tasks become virtually executed concurrently. If we have a multi-core CPU, say a dual-core CPU, two tasks can be concurrently executed by the two cores at any time. But if we have more tasks for concurrent execution, for each CPU core we still need a way of sharing the CPU time between tasks, in order to let the tasks, such as task one and task three, or task two and task four, appear to be executed concurrently. That brings us to the multitasking concept, which is the concept of allowing multiple programs or processes to execute simultaneously on a CPU time-sharing basis. Multitasking is usually supported by an OS with task management and maintenance features. For example, task scheduling and lightweight context-switching features are often seen in embedded OSes. A related concept is multithreading: if we consider a thread as a lightweight process within one bigger process, then multithreading is an execution model that allows a CPU to execute multiple such threads concurrently within the same process.
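To make the idea of "a sequential function with an infinite loop" concrete, here is a minimal sketch of the two user tasks mentioned above, assuming a FreeRTOS-style kernel API. The helper functions read_temperature_sensor(), store_sample(), and send_to_cloud() are hypothetical placeholders, not part of the course material or any real driver library.

/* Minimal sketch of two user tasks, assuming a FreeRTOS-style API. */
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical placeholder helpers standing in for real drivers. */
static int  read_temperature_sensor(void) { return 25; }   /* fake sensor read */
static void store_sample(int s)           { (void)s; }     /* fake memory store */
static void send_to_cloud(void)           { }              /* fake network send */

/* Task 1: collect temperature data and store it in memory. */
static void vCollectTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {                                   /* infinite loop: the task never returns */
        int sample = read_temperature_sensor();
        store_sample(sample);
        vTaskDelay(pdMS_TO_TICKS(1000));         /* yield the CPU for one second */
    }
}

/* Task 2: transfer the stored data to the cloud endpoint. */
static void vTransferTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        send_to_cloud();
        vTaskDelay(pdMS_TO_TICKS(5000));         /* runs less often than task 1 */
    }
}

int main(void)
{
    /* Register both user tasks with the kernel, then hand control to the
       scheduler, which shares the CPU time between them. */
    xTaskCreate(vCollectTask,  "collect",  256, NULL, 2, NULL);
    xTaskCreate(vTransferTask, "transfer", 256, NULL, 1, NULL);
    vTaskStartScheduler();
    return 0;                                    /* never reached */
}

Both tasks are ordinary sequential functions; the kernel's scheduler time-shares the CPU between them so they appear to run concurrently, even on a single core.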
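The multithreading idea at the end can be illustrated with a generic POSIX-threads sketch, not tied to any particular embedded OS: two threads execute concurrently within the same process and share its address space.

/* Two threads running concurrently within one process (POSIX threads). */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    const char *name = (const char *)arg;
    for (int i = 0; i < 3; i++) {
        printf("%s: step %d\n", name, i);   /* output from the two threads interleaves */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    /* The OS time-shares the CPU, or uses multiple cores, so the two
       threads appear to execute at the same time. */
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}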
