OSI Lab2: User-level Threading

In this lab, you will implement a user-level threading package. This means you will write code that operates entirely in user space, without relying on kernel-level thread management. Your implementation will handle tasks such as creating, destroying, and scheduling threads. The threading package will be non-preemptive: only one thread runs at a time, and a running thread keeps the CPU until it voluntarily yields control to let other threads execute.

Section 0: Fetch lab2 from upstream repo

You should have joined the upstream repository (git@github.com:NEU-CS6640-labs/egos-upstream) as part of Lab 1. To verify that the upstream remote is configured, run:

$ git remote -v

You should see git@github.com:NEU-CS6640-labs/egos-upstream.git in the output. If it is missing, add the upstream repository by running:

$ git remote add upstream git@github.com:NEU-CS6640-labs/egos-upstream.git

We pushed lab2 to a remote branch, upstream/lab2. Note that upstream points to git@github.com:NEU-CS6640-labs/egos-upstream.git. Next, you will fetch the branch and create a lab2 branch in your local repo (i.e., your private GitHub repo origin, which points to git@github.com:NEU-CS6640-labs/egos-2k-<YOUR_ID>).

Here is a visualization of how the repos are organized: see the “lab repo architecture” figure.

Here are steps to fetch lab2:

  • fetch lab2 branch
    $ git fetch upstream lab2
    
    <you should see:>
    ...
     * [new branch]      lab2       -> upstream/lab2
    
  • create a local lab2 branch
    $ git checkout --no-track -b lab2 upstream/lab2
    <you should see:>
    Switched to a new branch 'lab2'
    
  • confirm you’re on local branch lab2
    $ git status
    <you should see:>
    On branch lab2
    nothing to commit, working tree clean
    
  • push branch lab2 to remote repo origin
    $ git push -u origin lab2
    
  • From this point on, you should work on the lab2 branch and push to origin’s lab2 branch to submit your work. The OSI staff will grade your origin:lab2 branch, so if you have any updates from Lab 1 that you want to keep, merge them into the lab2 branch.
  • If you encounter merge conflicts, you must resolve them manually. Please read the linked instructions on how to resolve conflicts.

Section 1: User-level threading design

HOWTO: Start by carefully reading all instructions to gain a basic understanding of the lab tasks, rather than jumping directly into coding. Revisit the instructions as needed throughout the lab when you encounter difficulties.

Preview

The core of implementing your user-level threading is the following four functions in apps/user/ult.c:

// initialize the threading package
void thread_init();

// create a new thread with a given stack size
void thread_create(void (*f)(void *arg), void *arg,
                       unsigned int stack_size);

// yield to another thread (if any)
void thread_yield();

// exit the current thread
void thread_exit();

There will be detailed descriptions of the expected functionality and suggestions for how to implement it. Before that, you should understand and decide on the overall design of the threading package.

In this lab, you should limit your code and modifications to two files: apps/user/ult.c and library/queue.h.
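
To make the interface concrete, here is a hypothetical program that exercises all four functions. This is only a sketch: the worker function, its strings, and the way main finishes are illustrative assumptions, not the actual tests in apps/user/ult_test.c.

/* Hypothetical usage of the threading API (illustrative; not the real ult_test.c). */
static void worker(void *arg) {
    for (int i = 0; i < 3; i++) {
        printf("%s: round %d\n", (char *)arg, i);
        thread_yield();                               /* voluntarily give up the CPU */
    }
    /* returning from the entry function is treated like calling thread_exit() */
}

int main() {
    thread_init();                                    /* main becomes the first thread */
    thread_create(worker, "worker 1", 16 * 1024);     /* 16 KB stack                   */
    thread_create(worker, "worker 2", 16 * 1024);
    thread_exit();                                    /* never fall through to return 0 */
}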

Thread control block

A thread is a basic unit of execution. It occupies the CPU and runs its own functions until it invokes thread_yield(). Each thread is associated with a thread control block (TCB), which stores the state and management information for the thread.

In Lab 2, the TCB is defined as struct thread in apps/user/ult.c. You will decide what information to store in the thread control block. Typically, it includes the following:

  • Thread ID: A unique identifier for the thread.
  • Status: The current execution state of the thread.
  • Function pointer: The entry point (or “main”) function of the thread.
  • Function arguments: The arguments passed to the thread’s main function.
  • Stack pointer: The address of the top of the thread’s execution stack.
  • Program counter (*): The address of the next instruction to execute.
  • Registers (*): General-purpose registers holding the thread’s execution state.

The skeleton code provided for context switching already saves and restores the starred items (the program counter and registers), so you do not need to manage them manually. You will see the details of how registers are saved later in the lab.
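
To make this concrete, here is one possible layout for the thread control block. It is only a sketch: the field and enum names below are assumptions, and Exercise 1 asks you to design your own.

/* One possible TCB layout (illustrative only; design your own in Exercise 1). */
enum thread_status { THREAD_READY, THREAD_RUNNING, THREAD_TERMINATED };

struct thread {
    int                 id;               /* unique thread ID                         */
    enum thread_status  status;           /* current execution state                  */
    void              (*entry)(void *arg);/* the thread's "main" function             */
    void               *arg;              /* argument passed to the entry function    */
    void               *sp;               /* saved stack pointer (top of the stack)   */
    void               *stack_base;       /* address returned by malloc(), for free() */
};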

Exercise 1 Design your thread control block

  • Complete the definition of struct thread in apps/user/ult.c.
  • Implement a mechanism to generate unique thread IDs.
  • Define the possible thread states (e.g., ready, running, terminated).
  • The function pointer and its argument are provided by thread_create(). Refer to its function signature to ensure the correct types.
  • Define the stack pointer with type void *.
  • Feel free to add additional metadata to support debugging or thread management.

Notes:

  • You will likely revisit and revise the thread control block as the implementation progresses and additional information becomes necessary.
  • Writing a helper function to initialize thread control blocks may simplify development and help ensure that all attributes (including those added later) are properly initialized.
  • Each thread will allocate memory using malloc. Tracking this allocated memory in the thread control block will simplify thread destruction, as you will need to free it.

Thread context switch

Each thread has its own execution context. A thread context represents the state of a thread at a particular moment in time and includes all information required by the threading package to resume execution.

In egos-2k+, a thread context consists of the general-purpose registers, the stack, and the program counter (pc), which holds the address of the next instruction to execute.

Next, we describe how egos-2k+ performs context switching in user space:

(a) a helper function, ctx_switch()

A thread context switch suspends the currently running thread and transfers execution to another thread. For example, switching from thread t1 to thread t2 involves the following steps:

  1. Save the registers and return address (from the call to ctx_switch()) of t1.
  2. Switch the stack from t1 to t2.
  3. Restore the registers of t2.
  4. Return to the point where t2 last yielded.
  5. Resume execution of t2.

The above steps will be handled by a helper function we provide, named ctx_switch:

void ctx_switch(void** old_sp, void* new_sp);

ctx_switch() pushes the current thread’s registers onto its stack and then saves the current stack pointer in *old_sp. It then sets the stack pointer to new_sp and pops the registers that the new thread pushed when it was last suspended. You can find the implementation of ctx_switch() in apps/user/thread.s.
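
For example, switching from thread t1 to thread t2 might look like the following. This is a sketch: the sp field and the global current pointer are assumed names from a hypothetical TCB, not part of the provided code.

/* Sketch: suspend t1 and resume t2 (sp and current are hypothetical names). */
current = t2;
ctx_switch(&t1->sp, t2->sp);   /* saves t1's registers and sp, restores t2's */
/* Execution resumes here only when some thread later switches back to t1.   */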

(b) another helper function, ctx_start()

The ctx_switch() function switches between two runnable threads, but a separate mechanism is needed to create a thread’s context initially.

To support context creation, we provide ctx_start:

void ctx_start(void** old_sp, void* new_sp);

Unlike ctx_switch(), ctx_start() does not restore the registers of a previously running thread, since no such thread exists. Instead, it transfers control to ctx_entry(), a function that you will implement.
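
For example, thread_create() might hand the CPU to a brand-new thread like this. Again a sketch: creator, new_thread, and the sp field are assumed names.

/* Sketch: start a brand-new thread whose empty stack begins at new_thread->sp.  */
current = new_thread;
ctx_start(&creator->sp, new_thread->sp);  /* saves the creator, jumps to ctx_entry() */
/* The creator resumes here only after another thread ctx_switch()es back to it. */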

Exercise 2 Understand ctx_start()/ctx_switch()

  • Read the implementations of ctx_switch() and ctx_start() in apps/user/thread.s.
  • Refer to RISC-V registers on the reference page to understand the register set.
  • Refer to RISC-V instruction listings on the reference page to understand the instructions.
  • Later, you will use ctx_start() to initialize a brand-new thread with an empty stack, and use ctx_switch() to switch between existing threads.

Design wrap-up and scheduling threads

A brief summary

Here is a summary of thread execution. When creating a thread, allocate a thread control block (struct thread) and a stack using malloc(). The thread control block stores per-thread information, such as the execution state and stack pointer. Use ctx_start() to run the newly created thread for the first time. When switching to another thread, update the thread control blocks of both the current and next threads, then invoke ctx_switch() to switch their contexts. When a thread exits, ensure that the threading package never resumes its execution and that all memory allocated for the thread is freed.

Scheduling threads

Your threading package should use a simple FIFO scheduling algorithm. All runnable threads should be placed in a queue, and each time a thread yields the CPU, the thread at the head of the queue should run next. To implement this policy, you need a queue:

Exercise 3 Implement a queue

  • Implement enqueue() and dequeue() in library/queue.h.
  • We provide skeleton code for one possible implementation of a queue. You are free to discard it and use your own design (a sketch of one alternative follows the notes below).

Notes:

  • Test your queue implementation thoroughly!
    Bugs in the queue will make debugging the user-level threading system extremely difficult.
  • You can treat the queue implementation as standard C code; you do not need to test it within egos.
  • This queue implementation will be reused in future labs.
  • In addition to the runnable thread queue, you may need a separate queue for terminated threads. This will become clearer when you implement thread_exit().
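
For reference, a minimal singly linked FIFO queue could look like the sketch below. It is only one possible design and may not match the provided skeleton or the exact interface expected by library/queue.h.

/* Minimal FIFO queue sketch (may differ from the interface in library/queue.h). */
#include <stdlib.h>   /* malloc, free -- handy when testing as standard C */

struct queue_node {
    void              *item;
    struct queue_node *next;
};

struct queue {
    struct queue_node *head;   /* dequeue from the head */
    struct queue_node *tail;   /* enqueue at the tail   */
};

void enqueue(struct queue *q, void *item) {
    struct queue_node *node = malloc(sizeof(*node));
    node->item = item;
    node->next = NULL;
    if (q->tail) q->tail->next = node;
    else         q->head       = node;
    q->tail = node;
}

void *dequeue(struct queue *q) {
    if (q->head == NULL) return NULL;      /* empty queue */
    struct queue_node *node = q->head;
    void *item = node->item;
    q->head = node->next;
    if (q->head == NULL) q->tail = NULL;
    free(node);
    return item;
}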

Section 2: Implementing core functions

In this section, you will implement the four threading interfaces:

  • thread_init()
  • thread_create()
  • thread_exit()
  • thread_yield()

and their corresponding helper functions.

Thread package initialization and thread creation

  1. thread_init() initializes all data structures in the threading package. In addition, it creates a data structure representing the current thread.

  2. thread_create(void (*f)(void *arg), void *arg, unsigned int stack_size) creates a new thread. The thread executes function f with a single argument arg. If f returns, the thread should be cleaned up as if it had invoked thread_exit(). During the execution of thread_create(), your code should yield the CPU to the newly created thread.

    For example, thread_create(consumer, "consumer 1", 16 * 1024) creates a new thread with a 16 KB stack that invokes consumer("consumer 1").
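
Because the stack grows downward, the new thread’s initial stack pointer is the top of the allocated region, not the address returned by malloc(). Here is a sketch of the allocation step, with hypothetical field names carried over from the earlier TCB sketch.

/* Sketch of the allocation inside thread_create() (field names are assumptions). */
struct thread *t = malloc(sizeof(struct thread));
t->stack_base = malloc(stack_size);               /* keep this pointer for free()  */
t->sp    = (char *)t->stack_base + stack_size;    /* the stack grows downward      */
t->entry = f;
t->arg   = arg;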

Exercise 4 Implement thread_init, thread_create, and ctx_entry

  • Implement thread_init().
    • Remember to create a thread control block for the current thread (i.e., the main thread).
  • Implement thread_create().
    • Allocate a stack and a thread control block for the new thread.
    • Run the new thread immediately for the first time using ctx_start().
    • This will invoke ctx_entry(). Before doing so, place the current thread on the runnable thread queue and treat the newly created thread as the current thread.
    • Hint: The stack grows downward, so assign the correct address to the stack pointer; it is not the address returned by malloc().
  • Implement ctx_entry().
    • Invoke the thread’s “main” function: f(arg).
    • For now, if ctx_entry() returns directly, a kernel exception will occur.
      • This happens because return pops the current stack frame and attempts to return to a previous frame, but the current thread has no previous stack frame. Execution reaches ctx_entry() via ctx_start(), which is not a normal function call.
    • Instead of returning, explicitly call thread_exit() (a sketch of ctx_entry() follows this list).
  • After implementing these functions, run your code:
    $ make qemu
    ...
    ➜ /home/cs6640 % ult a   # run "ult a" in the egos shell
    ...
    ➜ /home/cs6640 % ult b
    
    • You should pass ult a and ult b.
    • “Pass” means that the expected strings appear on the screen (see the comments in apps/user/ult_test.c for details). At this stage, your code may still exit or crash at the end, since not all components have been implemented yet.
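
For reference, ctx_entry() can be as small as the sketch below, assuming the TCB stores the entry function and its argument under the hypothetical names entry and arg, and that current points to the running thread’s TCB.

/* Sketch of ctx_entry() (current, entry, and arg are assumed names). */
void ctx_entry() {
    current->entry(current->arg);   /* run the thread's "main" function */
    thread_exit();                  /* never return from ctx_entry()    */
}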

Thread exiting and yielding

  1. thread_exit() indicates that a thread has finished execution. This function must never return. Instead, the thread should be cleaned up and another runnable thread—if any—should resume execution. Note that any function may call thread_exit() to terminate the current thread.

  2. thread_yield() indicates that the current thread voluntarily yields the CPU to another runnable thread. If no other runnable thread exists, thread_yield() is a no-op and immediately returns to the current thread.

    The threading package gains control only when the current thread invokes one of the API functions. There is no “main loop”, so all scheduling decisions must be made when the threading package gains control.
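
As an illustration, a FIFO thread_yield() could follow the sketch below. The names run_queue, current, sp, and the status values are assumptions carried over from the earlier sketches, not a required API.

/* Sketch of a FIFO thread_yield() (names are assumptions, not requirements). */
void thread_yield() {
    struct thread *next = dequeue(&run_queue);
    if (next == NULL) return;              /* no other runnable thread: no-op       */

    struct thread *prev = current;
    prev->status = THREAD_READY;
    enqueue(&run_queue, prev);             /* current goes to the back of the queue */

    current = next;
    next->status = THREAD_RUNNING;
    ctx_switch(&prev->sp, next->sp);       /* returns when prev is scheduled again  */
}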

Exercise 5 Implement thread_exit and thread_yield

  • Implement thread_exit().
    • Cleanup involves several subtleties. When a thread exits, it cannot clean up its own resources, since doing so would destroy its stack and prevent it from switching to the next runnable thread.
    • Instead, the exiting thread should switch to another runnable thread and let that thread perform the cleanup (a sketch of this pattern follows this list).
    • During cleanup, free the thread control block, the thread stack, and any additional thread metadata, if allocated.
    • If the main() thread reaches return 0, the entire process terminates, even if other threads are still runnable. This behavior follows from the design of user-level threads: the OS is unaware of user-level threading, so process termination ends all threads.
      To avoid this, ensure that the main thread calls thread_exit() before reaching return 0 (as done in the provided test cases).
  • Implement thread_yield().
    • Use a simple FIFO scheduling policy.
    • Track the currently running thread explicitly. Typically, the current thread should not appear in the runnable queue, which should contain only threads that are runnable but not currently executing.
    • Note: Be careful when using printf(), as your implementation of printf may introduce unexpected issues. If you observe discrepancies between runs with and without printf, consider debugging with gdb or using simple format specifiers such as %s, %d, or %x.
  • After completing this exercise, your implementation should pass all test cases (ult [a..f]).
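
The deferred-cleanup pattern mentioned above could look like the sketch below. As before, run_queue, zombie_queue, current, stack_base, and sp are assumed names, and what to do when no runnable thread remains is left to your design.

/* Sketch of deferred cleanup in thread_exit() (names are assumptions). */
void thread_exit() {
    struct thread *prev = current;
    prev->status = THREAD_TERMINATED;
    enqueue(&zombie_queue, prev);          /* do NOT free yet: we are still on prev's stack */

    struct thread *next = dequeue(&run_queue);
    if (next == NULL) {
        /* No runnable thread remains; handling this case is part of your design. */
        while (1) {}
    }
    current = next;
    next->status = THREAD_RUNNING;
    ctx_switch(&prev->sp, next->sp);       /* control never comes back to prev */
}

/* Whichever thread runs next (e.g., at the start of thread_yield()) can then
   free the zombies that exiting threads left behind: */
void reap_zombies() {
    struct thread *z;
    while ((z = dequeue(&zombie_queue)) != NULL) {
        free(z->stack_base);               /* the stack allocated in thread_create() */
        free(z);                           /* the thread control block               */
    }
}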

Finally, submit your work

Submitting consists of three steps:

  1. Executing this checklist:
    • Fill in ~/osi/egos/slack/lab2.txt.
    • Make sure that your code builds with no warnings.
  2. Push your code to GitHub:

     $ cd ~/osi/egos/
     $ git commit -am 'submit lab2'
     $ git push origin lab2
    
     Counting objects: ...
      ....
      To github.com/NEU-CS6640-labs/egos-2k-<YOUR_ID>.git
         c1c38e6..59c0c6e  lab2 -> lab2
    
  3. Actually commit your lab (with timestamp and git commit id):

    1. Get the git commit id of your work. A commit id is a 40-character hexadecimal string. You can obtain the commit id for the last commit by running the command git log -1 --format=oneline.

    2. Submit a file named git.txt to Canvas (there will be an assignment for this lab on Canvas). The file git.txt contains two lines: the first line is your GitHub repo URL; the second line is the git commit id that you want us to grade. Here is an example:
       git@github.com:NEU-CS6640-labs/egos-2k-<YOUR_ID>.git
       29dfdadeadbeefe33421f242b5dd8312732fd3c9
      

      Notice: the repo address must start with git@github.com:... (not https://...). You can get your repo address on your GitHub repo page by clicking the green “Code” button and choosing “SSH”.

    3. Note: You can submit as many times as you want; we will grade the last commit id submitted to Canvas. Also, you can submit any commit id in your pushed git history; again, we will grade the commit id submitted to Canvas.
      Notice: if you submit multiple times, the file name (git.txt) changes to git-N.txt where N is an integer and represents how many times you’ve submitted. We will grade the file with the largest N.

NOTE: Ground truth is what and when you submitted to Canvas.

  • A non-existent repo address or a non-existent commit id in Canvas means that you have not submitted the lab, regardless of what you have pushed to GitHub—we will not grade it. So, please double check your submitted repo and commit id!

  • The time of your submission for the purposes of tracking lateness is the timestamp on Canvas, not the timestamp on GitHub.

This completes the lab.

Acknowledgments

This lab was originally designed by Robbert van Renesse and has been tailored by the OSI staff. A large part of the lab instructions is also borrowed from his project description.