CanesOS Scheduler: a simple priority scheduler

by: burt rosenberg
at: university of miami
date: oct 2019

Overview

Operating systems manage hardware resources. One such resource is CPU cycles. Each hardware core supports (at least) one physical thread. The operating system associates physical threads with virtual threads in a way that provides satisfactory Concurrency, Efficiency and Latency. (The CEL theory.)

Concurrency:
For most matters, does the sharing of physical threads accurately simulate true concurrency?
Efficiency:
Is the hardware being used efficiently?
Latency:
Do events that require timely handling get timely handling?
There is interplay among these goals. The operating system wants efficiency; the user is allowed to want favoritism at the expense of efficiency. Achieving good latency might lower overall efficiency. Programs that place high demands on the accuracy of the concurrency might of necessity be disappointed.

There is a vast variety of schedulers, and they seem to be a bit of a black art. Linux has a particularly colorful history of schedulers. We go back in time to investigate the Unix scheduler, which is still relevant both for FreeBSD and Windows NT. The CanesOS scheduler takes many of its ideas from the Unix scheduler, and illustrates the most important scheduler concepts in a simpler implementation.

The Unix scheduler

The earliest documentation of Unix did not mention its scheduling algorithm. Bach describes the scheduling algorithm of SVR2 Unix, and it is essentially what is done in FreeBSD today, as well as in at least several of the previous FreeBSD versions.

FreeBSD uses a priority feedback scheduler. Each task (thread) is assigned a priority, and no lower priority task runs if there is a higher priority task that is ready to run. Among multiple tasks at the same priority, the scheduling is round robin: each task has a time quantum for which it is allowed to run before it is preempted and the next task at that priority is given the CPU.

FreeBSD uses a single, fixed time quantum of 1/10 of a second.

Since no lower priority process runs if any higher priority process is ready to run, there is the danger that lower priority tasks will get no CPU time at all. This is called starvation. Starvation is prevented by modifying the process's priority, introducing a dynamic priority.

The dynamic priority penalizes processes for their CPU usage, reducing the priority each time the process runs its full quantum. The past CPU usage of a process is forgiven by a certain fraction, the fraction depending on the CPU load. On a loaded CPU the past usage is not so quickly forgiven as on a lightly loaded CPU.
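
To make the idea concrete, below is a minimal sketch of a load-dependent dynamic priority in the spirit of the classic BSD scheduler. The names (estcpu, nice, usrpri), the constants, and the exact formulas are illustrative assumptions; they are not the CanesOS code that follows.

#define PUSER    50          /* base user priority (illustrative) */
#define MAXPRI  127          /* numerically largest, i.e. worst, priority */

struct proc_sketch {
	int estcpu ;         /* recent cpu usage, charged one tick per tick run */
	int nice ;           /* user-requested favoritism */
	int usrpri ;         /* resulting dynamic priority, larger is worse */
} ;

/* once per second: forgive past usage by a fraction that shrinks as load grows */
void decay_estcpu(struct proc_sketch *p, int load) {
	p->estcpu = (2*load*p->estcpu) / (2*load + 1) + p->nice ;
}

/* recompute the dynamic priority from the decayed usage */
void recompute_priority(struct proc_sketch *p) {
	int pri = PUSER + p->estcpu/4 + 2*p->nice ;
	if (pri > MAXPRI) pri = MAXPRI ;
	if (pri < PUSER) pri = PUSER ;
	p->usrpri = pri ;
}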

The CanesOS scheduler

A multi-level scheduler collects threads into broad priority classes. From highest to lowest priority they are: interrupt, kernel, real-time, I/O bound and compute bound interactive, and finally batch. A multi-level scheduler provides a different scheduling regime for each class, as befits the needs of the class. For instance, real-time class tasks are often run-to-completion, in which there is no preemption. Buyer beware.
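
As a sketch of this multi-level dispatch structure: the class names, the stub policy, and the functions below are hypothetical illustrations, not part of the CanesOS code.

#include <stddef.h>

enum sched_class { CLASS_INTERRUPT, CLASS_KERNEL, CLASS_REALTIME,
	CLASS_INTERACTIVE, CLASS_BATCH, N_CLASSES } ;

struct task { int tid ; } ;

/* per-class policy stub: a real system would implement, for example,
 * run-to-completion for CLASS_REALTIME and round robin within priority
 * for CLASS_INTERACTIVE */
struct task * pick_from_class(int c) {
	(void) c ;
	return NULL ;	/* stub: nothing ready in this class */
}

/* scan the classes from highest to lowest priority; the first class
 * with a ready task chooses that task by its own policy */
struct task * pick_next() {
	int c ;
	for (c=0; c<N_CLASSES; c++) {
		struct task * t = pick_from_class(c) ;
		if (t) return t ;
	}
	return NULL ;	/* a real system falls back to the idle thread */
}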

The CanesOS scheduler focuses on the interactive class, which uses priority levels with round-robin scheduling within a priority. While a good priority scheduler introduces a dynamic aspect to the priority, to prevent starvation and encourage responsiveness, we do not provide that in the caos85 scheduler at this time.

The full code follows, with an example run.

Headers and defines

The beginning of the file has the includes and some defines.

The number of priority levels is set by a define. In order to avoid the special case of no thread being ready to run, a special idle thread is placed at the lowest priority, and is specially handled to always be in the ready to run state.

#include<stdlib.h>
#include<stdio.h>
#include<assert.h>
#include<unistd.h>


/*
 * simple feedback priority scheduler, 
 * for the hypothetical cane OS 85 op sys, a.k.a. caos85
 * author: burt rosenberg
 * created: october 2, 2019
 * last update:
 *
 */



#define MAX_VERBOSE 2
#ifdef IS_VERBOSE
static int is_verbose = MAX_VERBOSE ;
#else
static int is_verbose = 0 ;
#endif

#define PRIORITY_LEVELS 8

#define IDLE_PROCESS_TID 1
#define IDLE_PROCESS_PRIORITY (PRIORITY_LEVELS-1)

#define SLEEP_PROB 0x01
#define WAKEUP_PROB 0x07

Data Structures

The major data structures are the thread control blocks, the priority table, and a wait queue.

Each thread is an example of a schedulable entity. Other such entities might be an interrupt or a deferred procedure call, but we are just interested in user threads of the interactive class. The tcb has a numerical identity, a state, a priority, and then a place for statistics. The statistics could be used for priority feedback (dynamic priority), but in this code, while the statistics are collected, they are not exploited.

The priority table is an array of linked lists of schedulable entities, with array index equal to priority. Each schedulable entity points to the tcb that is being scheduled. Within a priority level the list is scanned front to back, and the currently running task is moved to the end of the list, giving round robin within the level.

Threads transition between ready to run and waiting on an event, and therefore not ready to run. The transition to waiting occurs when the thread makes a syscall that requires a future event in order to complete. In this case, the thread is marked waiting and is placed on a linked list of tcb's called the wait queue.

Entries on the wait queue get scanned on interrupt events for waits that have been satisfied (perhaps by the actions of this very interrupt). The wait queue uses the technique of a dummy header node to simplify the code.

We simulate these things using random numbers, to provide insight into the workings and performance of our scheduler.
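
As a small illustration of the dummy-header technique: the node type and function below are illustrative only; the real wait queue walks struct TCB, as shown later.

struct node {
	struct node * next ;
	int key ;
} ;

/* unlink every node whose key matches; because the list begins with a
 * never-removed dummy header, the front of the list needs no special case */
void remove_matching(struct node * dummy_head, int key) {
	struct node * p = dummy_head ;
	while (p->next) {
		if (p->next->key == key)
			p->next = p->next->next ;	/* unlink, even at the front */
		else
			p = p->next ;
	}
}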

typedef enum { 
	THREAD_INIT, THREAD_READY, THREAD_RUNNING, THREAD_WAITING, THREAD_ZOMBIE  } 
	ThreadState ;

struct TCB {
	struct TCB * wait_next ;
	int tid ;
	ThreadState  thread_state ;
	int priority ;
	int cpu ;
	int sleeps ;
	int wakeups ;
} ;

struct SE { // schedulable entity
	struct SE * next ;
	union {
		struct TCB * tcb_p ;
	} se ;
} ;

static struct SE * priority_table[PRIORITY_LEVELS] ;
static struct TCB * current ;
static struct TCB * wait_queue ;

Core Scheduling Code

The core scheduling code selects the next ready process to run. A scan is made of the scheduler table from highest to lowest priority, scanning the linked list at each priority level from front to back, until the first ready thread is found.

There is always at least the idle thread ready. Threads that have run are placed at the back of their respective linked list, creating the discipline of round robin within a priority level.

void move_to_back(int level,struct SE * target) {
	struct SE * se_p = priority_table[level] ;

/* -v */ if (is_verbose>1)
		printf("(%s,%d) move_to_back: level %d, tid %d\n",
		__FILE__, __LINE__, level, target->se.tcb_p->tid) ;

	if (se_p==target) {
		priority_table[level] = se_p->next ;
		se_p = priority_table[level] ;
	}
	if (!priority_table[level]) {
		priority_table[level] = target ;
		return ;
	}
	while (se_p->next) {
		if ( se_p->next==target )
			se_p->next = target->next ;
		else 
			se_p = se_p->next ;
	}
	target->next = NULL ;
	se_p->next = target ;
	return ;
}

struct TCB * find_next_ready_thread(){
	int i ;
	struct SE * se_p ;
	
	// walks up the priority_table to find the first ready process
	// and moves it to the back of the queue, and returns a pointer to the tcb.
	// the point of moving the running process to the back is to give a round-robin
	// discipline within a given priority level
	
	for (i=0; i<PRIORITY_LEVELS; i++) {
		se_p = priority_table[i] ;
		if (!se_p) continue ;
		while (se_p) {
			if ( se_p->se.tcb_p->thread_state == THREAD_READY ) {
/* -v */			if (is_verbose>1)
					printf("(%s,%d) find_next_ready...: finds tid %d\n",
					__FILE__,__LINE__,se_p->se.tcb_p->tid ) ;
				move_to_back(i,se_p) ;
				return se_p->se.tcb_p ;
			}
			se_p = se_p->next ;
		}
	}
	// nothing is ready
	return NULL ;
}

Scheduler and Simulation

This code simulates the scheduling as it would occur in a running system.

When an interrupt or system call occurs, consider the procedure scheduler as being entered. The code first awards one unit of completed CPU use to the thread. Then the code simulates whether this was a preemption event or a system call by flipping a coin.

If a preemption event, the thread stays ready to run; if a system call the thread status becomes waiting, and the tcb is added to the wait queue.

Then the code simulates the interrupt activity that would have resolved the waits of waiting tasks, moving them back to ready to run. It does this with a walk along the wait queue, flipping a coin for each waiting task to decide whether, over the simulated time interval, that task has been moved back to ready.

That done, the find-ready code is invoked to return the thread that now becomes current. This is returned to the main program, ending one cycle of the scheduler. The main program will call the scheduler repeatedly, once for each scheduling event (preemption or syscall).

void simulate_wakeup(int mask) {
	struct TCB * tcb_p = wait_queue ;

	while (tcb_p->wait_next) {
		if ((random()&mask)==0) {
/* -v */		if (is_verbose>1) printf("** wake up task %d\n",
					tcb_p->wait_next->tid ) ;
			tcb_p->wait_next->thread_state = THREAD_READY ;
			tcb_p->wait_next->wakeups ++ ;
			tcb_p->wait_next = tcb_p->wait_next->wait_next ;
		}
		else
			tcb_p = tcb_p->wait_next ;
	}
	return ; 
}

void simulate_preempt_wait(struct TCB * current, int mask) {
	if (current->tid==IDLE_PROCESS_TID)
		// idle process never sleeps
		return ;
	
	if ((random()&mask)==0) {
		current->thread_state = THREAD_WAITING ;
		current->sleeps++ ;
		current->wait_next = wait_queue->wait_next ;
		wait_queue->wait_next = current ;
	}
	return ;
}

struct TCB * scheduler(int wait_prob, int wakeup_prob) {
	// give current one quantum
	current->cpu++ ;
	simulate_preempt_wait(current,wait_prob) ;
	simulate_wakeup(wakeup_prob) ;
	return find_next_ready_thread() ;
}

Support Routines for Main

Here are various support routines for the main program: initialization and printing routines.

struct TCB * create_tcb(int priority) {
	struct TCB * tcb_p ;
	static int next_avail_tid = 1 ;
	
	tcb_p = (struct TCB *) malloc(sizeof(struct TCB)) ;
	if (!tcb_p) 
		return NULL ;
		
	tcb_p->tid = next_avail_tid++ ;
	tcb_p->priority = priority ;
	tcb_p->thread_state = THREAD_INIT ;
	tcb_p->cpu = 0 ;
	tcb_p->sleeps = 0 ;
	tcb_p->wakeups = 0 ;

	return tcb_p ;
}

int add_schedulable(struct TCB * tcb, ThreadState ts) {
	struct SE * se_p ;
	se_p = (struct SE *) malloc(sizeof(struct SE)) ;
	if (!se_p)
		return -1 ;
	
	se_p->se.tcb_p = tcb ;
	se_p->next = priority_table[tcb->priority] ;
	priority_table[tcb->priority] = se_p ;
	
	tcb->thread_state = ts ;
	return 0 ;
}


struct TCB ** create_and_schedule_threads(int priorities[], int n) {
	struct TCB * tcb ;
	struct TCB ** tcb_table ;
	tcb_table = (struct TCB **) malloc(n*sizeof(struct TCB *)) ;
	int i ;
	for (i=0; i<n; i++) {
		tcb = create_tcb(priorities[i]) ;
		assert(tcb) ; // poor person's error checking
		tcb_table[i] = tcb ;
		add_schedulable( tcb, THREAD_READY) ;
	}
	return tcb_table ; 
}

void init_wait_queue(){
	wait_queue  = (struct TCB *) malloc(sizeof(struct TCB)) ;
	wait_queue->wait_next = NULL ;
	wait_queue->tid = -1 ;
	wait_queue->priority = -1 ;
	return ;
}

void print_tcb(struct TCB * tcb) {
	printf("(tid:%d, st:%d, pr:%d)",tcb->tid, tcb->thread_state, tcb->priority) ;
	return ;
}

void print_priority_table() {
	int i ;
	struct SE * se ;
	for (i=PRIORITY_LEVELS; i>0; i--) {
		printf("[%02d]",i-1) ;
		se = priority_table[i-1] ;
		while (se) {
		  	printf(" -> ") ;
			print_tcb(se->se.tcb_p) ;
			se = se->next ;
		}
		printf("\n") ;
	}
	return ;
}

void print_wait_queue_running() {
	struct TCB * tcb ;
	tcb = wait_queue ;
	printf("waiting:") ;
	while (tcb) {
		printf(" -> ") ;
		print_tcb(tcb) ;
		tcb = tcb->wait_next ;
	}
	printf("\n") ;
	printf(""current: "") ;
	print_tcb(current) ;
	printf("\n") ;
	return ;
}

Main

The main function parses the command line arguments, calls upon the support functions to initialize the data structures, and repeatedly calls the scheduler to simulate rounds of time slices (time quanta), with all the wait and wakeup events that occur in each time slice.

On conclusion it prints out the thread statistics.

#define USAGE_MESSAGE "scheduler -v nrounds"

int main(int argc, char * argv[]) {
	int priorities[] = {IDLE_PROCESS_PRIORITY, 1, 3, 3, 3, 5, 6, 6 } ;
	int n_tasks ;
	struct TCB ** tcb_table ;
	int n_rounds ;
	int i ;
	int ch ;
	
	srandomdev() ;
	
	n_tasks = sizeof(priorities)/sizeof(priorities[0]) ;
	tcb_table = create_and_schedule_threads(priorities, n_tasks) ;
	current = priority_table[IDLE_PROCESS_PRIORITY]->se.tcb_p ;
	init_wait_queue() ;

	while ((ch = getopt(argc, argv, "v")) != -1) {
		switch(ch) {

		/*
		 * modify or add to these case statements
		 */

		case 'v':
			is_verbose++ ;
			break ;
		default:
				printf("%s\n", USAGE_MESSAGE) ;
				return 0 ;
		}
	}
	argc -= optind; // these are globals defined in the getopt code
	argv += optind;

	if (argc!=1) {
		printf("%s\n", USAGE_MESSAGE) ;
		return 0 ;
	}
	n_rounds = atoi(argv[0]) ;
/*-v*/	if (is_verbose)
		printf("running for %d rounds, verbose level %d\n",n_rounds,is_verbose) ;

	if (is_verbose) {
		print_priority_table() ;
		print_wait_queue_running() ;
		printf("\n") ;	
	}

	while (n_rounds--) {

		// process events and schedule the next current
		current = scheduler(SLEEP_PROB,WAKEUP_PROB) ;

		if (!current) {
			// never, as idle thread is always ready
			printf("(%s,%d) exiting\n", __FILE__, __LINE__ ) ;
			assert(0) ;
		}

		if (is_verbose) {
			print_priority_table() ;
			print_wait_queue_running() ;
			printf("\n") ;
		}
	}
	for (i=0;i<n_tasks;i++)
		printf("tid:%2d, pri:%2d, cpu:%2d, waits:%2d\n", i+1, 
			tcb_table[i]->priority, tcb_table[i]->cpu, 
			tcb_table[i]->sleeps) ; // tcb_table[i]->wakeups
	
	return 0 ;
}

Running

Because the simulations are randomized, different runs give different results. This run uses a verbose level of 1 to show the priority table, the wait queue, and the running process (current).
### start

./scheduler -v  15
running for 15 rounds, verbose level 1
[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:5, st:1, pr:3) -> (tid:4, st:1, pr:3) -> (tid:3, st:1, pr:3)
[02]
[01] -> (tid:2, st:1, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1)
current: (tid:1, st:1, pr:7)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:5, st:1, pr:3) -> (tid:4, st:1, pr:3) -> (tid:3, st:1, pr:3)
[02]
[01] -> (tid:2, st:1, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1)
current: (tid:2, st:1, pr:1)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:5, st:1, pr:3) -> (tid:4, st:1, pr:3) -> (tid:3, st:1, pr:3)
[02]
[01] -> (tid:2, st:1, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1)
current: (tid:2, st:1, pr:1)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:5, st:1, pr:3) -> (tid:4, st:1, pr:3) -> (tid:3, st:1, pr:3)
[02]
[01] -> (tid:2, st:1, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1)
current: (tid:2, st:1, pr:1)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:4, st:1, pr:3) -> (tid:3, st:1, pr:3) -> (tid:5, st:1, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:2, st:3, pr:1)
current: (tid:5, st:1, pr:3)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:3, st:1, pr:3) -> (tid:5, st:1, pr:3) -> (tid:4, st:1, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:2, st:3, pr:1)
current: (tid:4, st:1, pr:3)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:5, st:1, pr:3) -> (tid:4, st:1, pr:3) -> (tid:3, st:1, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:2, st:3, pr:1)
current: (tid:3, st:1, pr:3)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:4, st:1, pr:3) -> (tid:3, st:3, pr:3) -> (tid:5, st:1, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:3, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:5, st:1, pr:3)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:3, st:3, pr:3) -> (tid:5, st:1, pr:3) -> (tid:4, st:1, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:3, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:4, st:1, pr:3)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:3, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:5, st:1, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:4, st:3, pr:3) -> (tid:3, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:5, st:1, pr:3)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:1, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:3, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:5, st:3, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:5, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:3, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:6, st:1, pr:5)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:7, st:1, pr:6) -> (tid:8, st:1, pr:6)
[05] -> (tid:6, st:3, pr:5)
[04]
[03] -> (tid:3, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:5, st:3, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:6, st:3, pr:5) -> (tid:5, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:3, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:8, st:1, pr:6)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:3, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:3, pr:5)
[04]
[03] -> (tid:3, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:5, st:3, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:8, st:3, pr:6) -> (tid:6, st:3, pr:5) -> (tid:5, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:3, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:7, st:1, pr:6)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:3, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:1, pr:5)
[04]
[03] -> (tid:3, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:5, st:3, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:8, st:3, pr:6) -> (tid:5, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:3, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:6, st:1, pr:5)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:3, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:3, pr:5)
[04]
[03] -> (tid:4, st:3, pr:3) -> (tid:5, st:3, pr:3) -> (tid:3, st:1, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:6, st:3, pr:5) -> (tid:8, st:3, pr:6) -> (tid:5, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:3, st:1, pr:3)

[07] -> (tid:1, st:1, pr:7)
[06] -> (tid:8, st:3, pr:6) -> (tid:7, st:1, pr:6)
[05] -> (tid:6, st:3, pr:5)
[04]
[03] -> (tid:4, st:3, pr:3) -> (tid:5, st:3, pr:3) -> (tid:3, st:3, pr:3)
[02]
[01] -> (tid:2, st:3, pr:1)
[00]
waiting: -> (tid:-1, st:0, pr:-1) -> (tid:3, st:3, pr:3) -> (tid:6, st:3, pr:5) -> (tid:8, st:3, pr:6) -> (tid:5, st:3, pr:3) -> (tid:4, st:3, pr:3) -> (tid:2, st:3, pr:1)
current: (tid:7, st:1, pr:6)

tid: 1, pri: 7, cpu: 1, waits: 0
tid: 2, pri: 1, cpu: 3, waits: 1
tid: 3, pri: 3, cpu: 2, waits: 2
tid: 4, pri: 3, cpu: 2, waits: 1
tid: 5, pri: 3, cpu: 3, waits: 1
tid: 6, pri: 5, cpu: 2, waits: 2
tid: 7, pri: 6, cpu: 1, waits: 0
tid: 8, pri: 6, cpu: 1, waits: 1
### end 
Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

author: burton rosenberg
created: 3 oct 2019
update: 4 oct 2019