The Thread Abstraction

September 23, 2016

A process is a data context and one or more threads that run in that context. A thread is the ability, or the actuality, of executing a sequence of instructions. The computer instructions loaded into memory are part of the context; a thread is the running, or potential running, of that code.

A thread is an ability of the hardware. A modern CPU can execute multiple threads. Each thread is, roughly speaking, an Instruction Pointer. The CPU fetches the instruction at the memory location indicated by the pointer, executes it, and goes on to the next instruction, either as directed by the instruction (e.g. a jump) or, by default, by incrementing the Instruction Pointer. Although the purist might cringe at my being so hardware-centric in this explanation, I think one must reference the hardware in order to understand a thread clearly. A thread, as computer science defines it, is an abstraction of a CPU's ability to run instructions.

In this case, I believe, the hardware came first. Machines were built that could carry out a list of written instructions. As the machines became more complicated, managing their own carrying out of the instructions, threads became the thing that was managed. For instance, these machines have memory caches to store recent results. The managing of those caches is also a machine action, but one in support of the "thread", and if multiple threads are running, the cache actions include negotiations between the threads.

Speaking of hardware: a processor chip often has multiple, almost separate CPUs. This is called multi-core. Each core, however, can be cleverly constructed to execute multiple threads at a time, that is, to have multiple Instruction Pointers, sharing hardware resources so that the data associated with each thread stays correctly associated. This is called hyper-threading. The total number of hardware threads is the product of the number of cores and the hyper-threading multiplicity of each core.

The operating system provides for processes, including the process's demand for threads. The hardware can provide only a fixed number of threads. The operating system treats these hardware threads as a resource to assign to software threads, making each active at the correct moment, to simulate an unlimited number of hardware threads. The original word for this was time-slicing, as the exact method used in the simulation is to run a few instructions of each thread in rotation, slicing time up into intervals. This innovation was associated with operating systems that had time sharing in the name, such as IBM's Time Sharing Option (TSO), introduced in 1971. The forerunner was the Dartmouth Time Sharing System (DTSS), begun around 1962-64, as well as MIT's Compatible Time Sharing System (CTSS), which, through MIT's Multics, influenced Bell Labs' Unix.

The Unix Fork

To better ground this discussion, we will walk through the various ideas using the unix system as an example. Unix has a particularly simple way of handling processes. A process is created by a fork system call. Inside the kernel, the fork call results in the creation of a new process whose assets are an exact clone of the old process at the moment of the call, and then the kernel returns to both processes, so there are now two almost completely identical processes running. The old process is called the parent; the new process is called the child.

They are almost identical, but not exactly. The return value to the parent process is the process identifier, or PID, of the new process; the return value to the child process is 0. Zero is never a valid PID.

If the parent and child processes are to act differently after the fork, they check the return value of fork in an if statement and branch to different pieces of code. Typically, the goal of the parent is to launch not just a new process but a new application running in that process. The child's code then contains an exec system call whose parameter is the name of the application. After the fork, the child runs only the exec call, and the effect of that call is to replace the assets of the process with those of the named application, and then start the thread at the start of the application's code.

In code, it is simple. For a parent to fork off a new process that will run the directory listing program, /bin/ls,

if (!fork()) execl("/bin/ls", "ls", (char *)NULL);

The unix shell works this way. The shell is the command line program that implements user requests made through typing. Unix allows any program to serve as a shell: it is simply the first program run when the user logs in. Typically it is a proper shell program that loops endlessly between reading the user's commands and running them.

For the most part, every command is the name of a program to run. To list a directory, type "ls", and the shell runs the program "/bin/ls"; to copy a file, type "cp", and the shell runs the program "/bin/cp". It does this by parsing the command line and doing a fork-exec with the command name that results from the parsing. The shell then waits for the command to exit before returning to parse another command. The shell and the command share the terminal window, so the shell must wait, or else it would read the user input meant for the command, or write out prompts mixed in with what the command is writing.

Unix provides a wait system call that waits for the event that a child process goes zombie, a death-of-child event. It then collects the result value from the zombie process, releases the zombie to be dissolved by the operating system, and passes this result value back through the wait call's argument.

Over all, it is very easy:

char *parsed_command;
int child_result;
if (!fork()) execlp(parsed_command, parsed_command, (char *)NULL);
wait(&child_result);

Unix Threads

When it comes to process creation, the unix fork was not only a mechanism to accomplish it; it decisively influenced every process creation primitive that came after. However, threads were not common currency in 1969 when unix was born, or even in 1988 with UNIX System V Release 4 (SVR4, as it is better known). The result is that unix thread models are neither pretty nor influential.

Linux has replaced "fork" with "clone", which is given a vector of items that are to be shared or "cloned". The Linux community is not very strong on theory, however, so despite the practical significance of their work, they rarely know exactly what they are talking about. This prevents any other flavor of unix from following Linux in its innovations, but few are ever tempted anyway.

POSIX has worked as a committee to produce a threads API worthy of a committee. But this is what we have and this is what we must use.

Threads are complicated because programmers are not good at concurrent programming, the notion of two algorithms running side by side while sharing data structures. Helper structures tend to be required. Newer languages, such as Java and Python, provide more interesting thread structures and call upon the compiler to help with their correct deployment. Rather than a thread library, these languages have a Thread object. The object encapsulates the code the thread runs, and messages can be sent to the object to coordinate the thread with the other threads in the process.

Operating system programmers, however, must be proficient in concurrent programming. Most modern operating systems are concurrent, and with multiple cores on a CPU this concurrency is real, not simulated. The operating system is also responsible for delivering the thread abstractions used by Java and other higher-level languages.

posted in CSC521, Uncategorized by admin
