The semantics of event_wait are that it always blocks the calling process. The semantics of event_signal are that it unblocks all processes waiting on the Event.
You will implement the data structure(s) for representing Events at the kernel-level, and you will add system calls for user-level processes to event_open, event_close, event_wait, and event_signal on Events. In addition, you will need to write user-level programs to test your Event primitives.
An individual process can have multiple Events open at a time (and can wait and signal on any of its open Events). Your Event implementation, however, must make sure that only processes that have opened an Event can wait, signal, or close that Event.
I strongly encourage you to get an early start on this project; your solution will require a fair amount of thinking time, Linux code reading time, and code design and debugging time.
To implement support for Event primitives, you need to create kernel data structure(s) to keep track of Events that have been created (i.e. an Event table), to keep track of which processes currently have the Event open, to keep track of which processes are currently blocked on an Event, etc. Typically, the kernel uses fixed-sized data structures to keep track of instances of abstractions that it implements. For example, there is an open file table to keep track of all currently open files in the system. Because these data structures are fixed-sized, there is an upper bound on the number of instances that can currently exist in the system (e.g. there is a maximum number of open files). Implement your Event Table data structure so that it is fixed-sized, and make its size small (~5) so that you can test that you are handling the case when a process requests to open a new Event, but there are already 5 active Events (each of which is held open by at least one process). When an Event is closed for the last time, it is no longer an active Event, and its space in the Event Table can be used by a subsequent newly created Event.
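As a starting point, a fixed-size Event table might look something like the following sketch (the struct name, fields, and MAX_EVENTS are my assumptions, not a required layout):

/* requires <linux/wait.h> */
#define MAX_EVENTS 5

struct event {
    int id;                    /* unique Event id; -1 marks a free slot */
    int open_count;            /* number of processes with this Event open */
    wait_queue_head_t wait_Q;  /* processes blocked in event_wait */
    /* you also need to record WHICH processes have the Event open
       (e.g. a small array or list of pids) to enforce the rule that
       only openers may wait, signal, or close it */
};

static struct event event_table[MAX_EVENTS];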
You will add an event_init routine to the kernel, and then add a call to this routine in the kernel's initialization code that is in start_kernel() in init/main.c.
Your initialization function should have the following prototype:
#include <linux/init.h>

void __init event_init();   // this is NOT a system call
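A minimal sketch of what event_init might do, assuming the event_table sketch above (the printk is just a sanity check):

void __init event_init(void)
{
    int i;

    for (i = 0; i < MAX_EVENTS; i++) {
        event_table[i].id = -1;
        event_table[i].open_count = 0;
        init_waitqueue_head(&event_table[i].wait_Q);
    }
    printk("event_init: Event table initialized\n");
}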
Use GFP_KERNEL as the priority value passed to kmalloc.
Memory leaks in the kernel are very, very bad; make sure that if you add a call to kmalloc you add a corresponding call to kfree somewhere. Also, it is good programming practice to set a variable's value to NULL after freeing the space to which it points: kfree(ptr); ptr = NULL;
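Put together, the kernel version of that pattern looks like this sketch (struct event_opener is a made-up example type):

struct event_opener *p;

p = kmalloc(sizeof(*p), GFP_KERNEL);
if (p == NULL)
    return -ENOMEM;   /* kmalloc can fail; always check */
/* ... use p ... */
kfree(p);
p = NULL;             /* avoid accidentally reusing a dangling pointer */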
Your system calls should indicate failure by returning a negative error value (for example, return -ESRCH;).
Remember that you need to verify that values passed by reference to your system calls refer to readable/writable memory before you read/write to it (see project 2 if you forgot how to do this).
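For example, if one of your system calls takes a user-space pointer, one common pattern is to use copy_from_user / copy_to_user, which verify the user address range and return the number of bytes that could not be copied (the variable names here are illustrative):

#include <asm/uaccess.h>

int kval;

if (copy_from_user(&kval, user_ptr, sizeof(kval)))
    return -EFAULT;   /* user_ptr did not refer to readable memory */

/* ... and symmetrically when returning a result: */
if (copy_to_user(user_ptr, &kval, sizeof(kval)))
    return -EFAULT;   /* user_ptr did not refer to writable memory */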
A process can open the same Event only once.
In addition, you may want to implement a print_event_table function that can be called from within your system calls to help with debugging and testing.
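Such a helper might look like this sketch (again assuming the event_table layout above):

static void print_event_table(void)
{
    int i;

    for (i = 0; i < MAX_EVENTS; i++) {
        if (event_table[i].id != -1)
            printk("Event %d: open_count = %d\n",
                   event_table[i].id, event_table[i].open_count);
    }
}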
You will use wait_queue_head_t data structure(s) to block processes that have done a wait on an Event.
To block a process on an Event, you need to:
1. Add the process to the Event's wait queue (add_wait_queue).
2. Set the process's state to TASK_INTERRUPTIBLE (set_current_state).
3. Call schedule() to give up the CPU.
To wake up a process on an Event, you need to:
1. Wake up every process on the Event's wait queue (wake_up sets each waiter's state back to TASK_RUNNING and makes it runnable again).
2. Have each awakened process remove itself from the wait queue (remove_wait_queue) once its call to schedule() returns.
There is a higher-level function, interruptible_sleep_on (declared in wait.h, that calls a function in sched.c), to do much of this for you, but make sure you understand what this function is doing.
Your solution should NOT make calls to the higher-level functions interruptible_sleep_on or sleep_on. Instead, you will implement a solution that explicitly adds a process to a waitqueue and blocks it using the 3 steps listed above. You may, however, use wake_up to wake up all waiters on a wait queue.
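Putting the three blocking steps together, the body of event_wait might contain something like this sketch (my_event is assumed to point at an entry in your Event table; locking around the shared state is omitted):

struct task_struct *curtask = get_current();
DECLARE_WAITQUEUE(wait_entry, curtask);

add_wait_queue(&my_event->wait_Q, &wait_entry);  /* step 1: join the wait queue */
set_current_state(TASK_INTERRUPTIBLE);           /* step 2: mark ourselves not runnable */
schedule();                                      /* step 3: give up the CPU */

/* execution resumes here after wake_up (or a signal) */
remove_wait_queue(&my_event->wait_Q, &wait_entry);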
The wait queue data structure is a bit tricky to figure out. I suggest looking at existing code that initializes, adds, and removes items from a wait queue; draw some pictures, then step through the code for init_waitqueue_head, init_waitqueue_entry, add_wait_queue, and remove_wait_queue on your picture.
To iterate through the elements on a wait queue, you can use the list_for_each_safe macro, and the list_entry macro to get a pointer to the next wait_queue_t element in the list. Here are some example code fragments using wait_queues:
// declare and init a wait Q
static DECLARE_WAIT_QUEUE_HEAD(my_wait_Q);

// declare and initialize a wait_queue_t element named "wait_entry"
struct task_struct *curtask = get_current();
DECLARE_WAITQUEUE(wait_entry, curtask);

// there are functions to add and remove elements from a wait Q:
add_wait_queue(&my_wait_Q, &wait_entry);
remove_wait_queue(&my_wait_Q, &wait_entry);

// you can traverse the elements of a wait queue using the underlying
// list implementation used by the wait queue (see wait.h and list.h)
struct list_head *tmp, *next;
list_for_each_safe(tmp, next, &(my_wait_Q.task_list)) {
    wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list);
    ...
}
You will also need to synchronize access to your shared Event data structures; here are some example code fragments using kernel semaphores:

static DECLARE_MUTEX(event_mutex);        // declare a MUTEX semaphore initialized to 1
static DECLARE_MUTEX_LOCKED(event_mutex); // declare a MUTEX semaphore initialized to 0

if (down_interruptible(&event_mutex)) {
    // process woke up due to a signal...the semaphore was NOT acquired
}

// Critical Section Code (i.e. access shared data structures)
// make sure this process doesn't block while it holds the semaphore

up(&event_mutex);
Your system calls should return error values as they did in the previous assignment (for example, return -ESRCH). It is fine to use existing error values defined in errno.h that are close to, but perhaps not exactly, what you want for your errors. You may add new error codes if you'd like (using existing ones is fine though). User-level programs should call perror to print an error message when an error value is returned by your system calls.
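For instance, an Event system call might validate its argument like this (sys_event_wait and the lookup helper are hypothetical names, not part of the assignment's required interface):

asmlinkage int sys_event_wait(int event_id)
{
    struct event *ev = lookup_event(event_id);   /* hypothetical helper */

    if (ev == NULL)
        return -EINVAL;   /* no such active Event */
    /* ... block the caller as described above ... */
    return 0;
}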
As you design your solution, think about what a process is (or can be) doing while another process is modifying shared state associated with an Event. For example: is it possible for process i to do a wait on Event j, process k to do a signal on Event j, and then process i to do another wait on Event j before it has been removed from Event j's wait queue (i.e., can a process add itself to the wait Q more than one time)? Can a process exit while it is on an Event wait queue? When does a process' call to schedule() return (what is true about the process at this point)?
I suggest incrementally implementing and testing functionality. Start with
event_init, then event_open and test that a process can successfully open
several Events, and that you are correctly detecting error conditions.
You will likely want to add a fair amount of debugging code as you implement and test. The output from printk statements is also written to the files /var/log/messages and /var/log/kern.log.
Use good modular design...remember you can have regular kernel-level functions that your system calls call.
You should build a new kernel package for your lab3 kernel image (use a new --append-to-version flag for this project). You could start with your lab 2 kernel and add in lab 3 functionality or start with a new copy of the kernel source and add lab 3 functionality to that.
When you call system calls from your test program, get and check the return value. If the return value is -1, you should call perror to print out an error specific message.
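In your test program, that check might look like this (the syscall number __NR_event_open and its argument list are assumptions about your own interface):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int event_id = syscall(__NR_event_open);
if (event_id == -1)
    perror("event_open");   /* prints a message based on errno */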
Again, use information in /proc to help you debug your program's correctness.
Files I added
-------------
/usr/src/linux/kernel/mynewfile.c            # implementation of my new sys calls
...

Files I modified
----------------
/usr/src/linux/kernel/existingkernelfile.c   # added a call to mynewfunc in function blah
...
/*************** START my changes for project 3 ***************/
your code
/*************** END my changes for project 3 ***************/
You should write an interactive test program that provides a menu of options; the user selects the next option, and then you can show the results of that action. This is as opposed to writing a test program that contains all the calls you want to demo and just streams through them. The idea is to be able to stop after each system call, show me what happened as a result of that call, and demonstrate that the call did what it is supposed to do.
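The skeleton of such a program can be as simple as this sketch (the per-option handling is elided):

#include <stdio.h>

int main(void)
{
    int choice;

    for (;;) {
        printf("1) open  2) close  3) wait  4) signal  5) quit\n> ");
        if (scanf("%d", &choice) != 1 || choice == 5)
            break;
        /* invoke the system call corresponding to choice, then print
           its return value so the result of each call is visible */
    }
    return 0;
}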
Additionally, you may want to run a version of your kernel with debugging output that prints out the state of your Event data structures and prints out process state as wait, signal, open and close operations are executed on Events. This may help you to demonstrate that your solution is really blocking processes when it should be and really waking them up when it should be.