This lab will be done with your forever CS45 lab partner:
Lab partners and machine assignments
The semantics of event_wait are that it always blocks the calling process. The semantics of event_signal are that it unblocks all processes waiting on the Event.
You will implement the data structure(s) for representing Events at the kernel-level, and you will add system calls for user-level processes to event_open, event_close, event_wait, and event_signal on Events. In addition, you will need to write user-level programs to test your Event primitives.
An individual process can have multiple Events open at a time (and can wait and signal on any of its open Events). Your Event implementation, however, must make sure that only processes that open an Event can wait, signal, or close the Event.
I strongly encourage you to get an early start on this project; your solution will require a fair amount of thinking time, Linux code reading time, and code design and debugging time.
Start by setting up a git repo for your lab3 work. Make sure to add three users to your git repo: you, your partner, and your shared CS45 account.
Both you and your partner should then clone a copy into your private cs45/labs subdirectory. Here are the instructions for setting up a git repo from lab1.
Then either you or your partner can copy over some starting point code into your repo, then add and push it. I included an example gitignore file with the starting point code. First move it to .gitignore before adding and pushing it (and add the names of any executable files or other files you do not want git to add to your repo).
$ cp ~newhall/public/cs45/lab03/* .
$ ls
Makefile  README  event.c  gitignore  tester.c
$ mv gitignore .gitignore
$ git add .gitignore
$ git add *
$ git commit
$ git push origin master

The starting point code contains files with most of the #includes that you will need, and includes them in the correct order (there are some dependencies on the include order of some kernel header files, so be careful if you add more #includes). It also includes a Makefile and a starting point for a user-level test program (mostly some guesses at #includes are given to you).
Implement your Event Table data structure as a static global variable of a fixed size, and make its size small (~5) so that you can test that you are handling the case when a process requests to open a new Event but there are already 5 active Events (each of which is open by at least one process). Calls to event_open in this case should return an error indicating that no more Events can be created. When an Event is closed for the last time, it is no longer an active Event, and space for it in the Event Table can be used by a subsequent newly created Event.
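As a rough illustration only (this is not the required design, and the type and field names here are hypothetical), an Event Table entry might look something like this:

#include <linux/wait.h>   /* wait_queue_head_t */

#define MAX_EVENTS 5      /* keep this small so the "table full" case is easy to test */

/* hypothetical layout for one Event Table entry */
struct cs45_event {
    int id;                    /* Event identifier (or a marker for "unused slot") */
    int num_opens;             /* how many processes currently have this Event open */
    wait_queue_head_t waitq;   /* processes blocked in event_wait on this Event */
    /* ... plus whatever you need to track which processes have opened it ... */
};

static struct cs45_event event_table[MAX_EVENTS];   /* fixed-size, static global Event Table */

You will almost certainly need additional fields (for example, to enforce the rule that only openers may wait, signal, or close an Event), so treat this as a starting-point sketch.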
// NOTE: this is NOT a system call it is a function called by
// the kernel during its boot sequence to initialize state
// you need for your event implementation
static int __init event_init() {
    // code to initialize Event data structures and other state
}

// to register the event_init function to be called during boot sequence:
pure_initcall(event_init);
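Continuing the hypothetical event_table sketch above, the body of event_init might simply mark every slot unused and initialize each slot's wait queue (field names are placeholders, not the required design):

static int __init event_init(void) {
    int i;
    for (i = 0; i < MAX_EVENTS; i++) {
        event_table[i].id = -1;                       /* -1 marks an unused slot */
        event_table[i].num_opens = 0;                 /* nobody has this Event open yet */
        init_waitqueue_head(&event_table[i].waitq);   /* start with an empty wait queue */
    }
    return 0;   /* initcalls return 0 on success */
}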
kmalloc and kfree are kernel-level routines for dynamic allocation of kernel memory. Use GFP_KERNEL as the priority value passed to kmalloc. Memory leaks in the kernel are very, very bad; make sure that if you add a call to kmalloc you add a corresponding call to kfree somewhere. Also, it is good programming practice to set a variable's value to NULL after freeing the space to which it points: kfree(ptr); ptr = NULL;
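For example, a matched kmalloc/kfree pair in a fragment you might find inside one of your kernel functions (the buffer and its size are hypothetical):

/* kmalloc and kfree are declared in linux/slab.h */
int *buf = kmalloc(10 * sizeof(int), GFP_KERNEL);   /* GFP_KERNEL: normal priority, may sleep */
if (buf == NULL)
    return -ENOMEM;        /* allocation failed */

/* ... use buf ... */

kfree(buf);                /* every kmalloc needs a matching kfree */
buf = NULL;                /* don't leave a dangling pointer behind */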
If one of your system calls encounters an error, it should return an appropriate negative error value (for example, return -ESRCH); see the notes on error values below.
Remember that you need to verify that values passed by reference to your system calls refer to readable/writable memory before you read/write to it (see project 2 if you forgot how to do this).
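One common way to do this (which may or may not match the exact mechanism you used in project 2) is copy_from_user/copy_to_user, which fail on bad user pointers; the helper below is purely hypothetical:

#include <linux/uaccess.h>   /* copy_from_user, copy_to_user */

/* hypothetical helper: safely copy an int argument that a user-level
 * caller passed by reference; returns 0 on success, or -EFAULT if the
 * user pointer does not refer to readable memory */
static int get_user_int(int __user *uptr, int *kval) {
    if (copy_from_user(kval, uptr, sizeof(int)))
        return -EFAULT;
    return 0;
}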
A process can only open the same Event once.
You will use wait_queue_head_t data structure(s) to block processes that call event_wait on an Event.
To block a process on an Event, you need to:
1. add the process to the Event's wait queue,
2. set the process' state to TASK_INTERRUPTIBLE, and
3. call schedule() to give up the CPU until the process is woken up.

To wake-up a process on an Event, you need to remove it from the Event's wait queue and change its state back to runnable. There is a wake_up macro (defined in wait.h) that calls functions in sched.c to do much of this for you, but make sure you understand what this function is doing...just use the wake_up functions, don't try to write your own.
Your solution should NOT make calls to the higher-level functions interruptible_sleep_on or sleep_on. Instead, you will implement a solution that explicitly adds a process to a waitqueue and blocks it using the 3 steps listed above. You may, however, use wake_up to wake up all waiters on a wait queue.
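Here is a minimal sketch of those 3 steps inside event_wait (not a complete or required implementation), assuming the hypothetical per-Event wait queue field (waitq) from the earlier struct sketch and a hypothetical table index i; locking details are omitted:

struct task_struct *curtask = get_current();
DECLARE_WAITQUEUE(wait_entry, curtask);               /* wait queue entry for this process */

add_wait_queue(&event_table[i].waitq, &wait_entry);   /* (1) add process to the Event's wait queue */
set_current_state(TASK_INTERRUPTIBLE);                /* (2) mark this process as blocked */
/* ... release any locks protecting shared Event state before blocking ... */
schedule();                                           /* (3) give up the CPU until woken up */

/* when schedule() returns, this process has been woken up by a signal on the Event */
set_current_state(TASK_RUNNING);
remove_wait_queue(&event_table[i].waitq, &wait_entry);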
The wait queue data structure is a bit tricky to figure out. I suggest looking at existing code that initializes, adds, and removes items from a wait queue; draw some pictures and step through the code for init_waitqueue_head, init_waitqueue_entry, add_wait_queue, and remove_wait_queue on your picture.
To iterate through the elements on a wait queue, you can use the list_for_each_safe macro, and the list_entry macro to get a pointer to the next wait_queue_t element in the list. Here are some example code fragments using wait_queues (note: there are other ways to declare and initialize wait queues, and how you do so depends on the context in which you are using them; this is just one example):
// declare and init a wait Q
static DECLARE_WAIT_QUEUE_HEAD(my_wait_Q);

// declare and initialize a wait_queue_t element named "wait_entry"
struct task_struct *curtask = get_current();
DECLARE_WAITQUEUE(wait_entry, curtask);

// there are functions to add and remove elements from a wait Q:
add_wait_queue(&my_wait_Q, &wait_entry);
remove_wait_queue(&my_wait_Q, &wait_entry);

// you can traverse the elements of a wait queue using the underlying
// list implementation used by the wait queue (see wait.h and list.h)
struct list_head *tmp, *next;
list_for_each_safe(tmp, next, &(my_wait_Q.task_list)) {
    wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list);
    ...
}

The best way to figure out how to use wait queues is to look at kernel code that uses them (and look for different types of uses), and look through the wait.h file for macros, type defs, and functions you can use. As a rule of thumb, avoid using anything starting with an underscore.
static DECLARE_MUTEX(event_mutex);          // declare a MUTEX semaphore initialized to 1
static DECLARE_MUTEX_LOCKED(event_mutex);   // declare a MUTEX semaphore initialized to 0

if (down_interruptible(&event_mutex)) {
    // process woke up due to signal...semaphore was NOT acquired
}

// Critical Section Code (i.e. access shared data structures)
// make sure this process doesn't block while it holds the semaphore

up(&event_mutex);

You may not, however, use semaphores as the blocking and unblocking mechanism for blocking and unblocking processes that call event_wait and event_signal.
Your system calls should return error values as they did in the previous assignment (for example, return -ESRCH). It is fine to use existing error values defined in errno.h that are close to, but perhaps not exactly, what you want for your errors. You may add new error codes if you'd like, but it is not required. User-level programs should call perror to print out an error message whenever a call to one of your system calls returns an error.
As you design your solution, think about what a process is (or can be) doing while another process is modifying shared state associated with an Event. For example: is it possible for process i to do a wait on Event j, process k to do a signal on Event j, and then process i to do another wait on Event j before it has been removed from Event j's wait queue (i.e., add itself to the wait Q more than one time)? Can a process exit while it is on an Event wait queue? When does a process' call to schedule() return (what is true about the process at this point)?
I suggest incrementally implementing and testing functionality. Start with event_init, then event_open, and test that a process can successfully open several Events and that you are correctly detecting error conditions. As you go, implement partial functionality of event_print to print out information about the Event functionality you have implemented.
You will likely want to add a fair amount of debugging code as you implement and test. The output from printk statements is written to the files /var/log/messages and /var/log/kern.log. It can also be echoed to the console by running sudo dmesg -n 7.
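For example, a debugging statement inside your event_wait implementation might look like the following (event_id here is a hypothetical local variable from your own code):

printk(KERN_DEBUG "event_wait: pid %d blocking on Event %d\n",
       current->pid, event_id);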
Use good modular design...remember you can have regular kernel-level
functions that your system calls call.
When you call system calls from your test program, get and check the return value. If the return value is -1, you should call perror to print out an error-specific message.
Again, use information in /proc to help you debug your
program's correctness.
Create a single tar file with the following and submit it using cs45handin:
Files we added
--------------
# implementation of my new sys calls
/usr/src/linux/kernel/mynewfile.c
...

Files we modified
-----------------
# added a call to my function foo in function blah:
/usr/src/linux/kernel/existingfile.c
...
@garlic:/local/me_and_pal/linux-headers-2.6.32.44-lab3_1.0_amd64.deb
@garlic:/local/me_and_pal/linux-image-2.6.32.44-lab3_1.0_amd64.deb

Immediately before or after running cs45handin, rebuild the kernel_image and kernel_header packages (the Option 1 build) so that they contain your latest lab3 updates (in case you have been doing a bzImage build). Then DO NOT MODIFY THEM AFTER THE DUE DATE.
I recommend writing a simple, generic, interactive test program for your demo. It would provide a menu of system call options, read in the user's next option (and any additional input needed for that option), and then make a call to the user's selected system call. After each such selection, you can then show me the results of that particular system call by using information from /proc, ps, top, etc., or by calling print_event_table. This is as opposed to writing a test program that contains all the calls you want to demo and just streams through them. The idea is to be able to stop after each system call and show me what happened as a result of that call, demonstrating that the call did what it is supposed to do. And then show me the results of another system call, and so on.
A simple test program with a menu of system call options, where a user can select any one after another, can be used to show any sequence of system calls: it should be easy to write and extremely flexible for showing all kinds of different scenarios.
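Here is a minimal sketch of such a menu-driven tester. The SYS_* numbers and the argument lists are hypothetical placeholders; substitute the syscall numbers and arguments from your own implementation:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SYS_event_open   337   /* hypothetical syscall numbers */
#define SYS_event_close  338
#define SYS_event_wait   339
#define SYS_event_signal 340

int main(void) {
    int choice, id;
    long ret;

    while (1) {
        printf("\n1) open  2) close  3) wait  4) signal  5) quit\nchoice: ");
        if (scanf("%d", &choice) != 1 || choice == 5)
            break;

        if (choice == 1) {
            ret = syscall(SYS_event_open);        /* assumes event_open takes no arguments */
        } else if (choice >= 2 && choice <= 4) {
            printf("event id: ");
            if (scanf("%d", &id) != 1)
                break;
            if (choice == 2)
                ret = syscall(SYS_event_close, id);
            else if (choice == 3)
                ret = syscall(SYS_event_wait, id);
            else
                ret = syscall(SYS_event_signal, id);
        } else {
            continue;                             /* ignore unknown options */
        }

        if (ret == -1)
            perror("event system call");          /* error-specific message via errno */
        else
            printf("system call returned %ld\n", ret);
    }
    return 0;
}

After each menu selection you can pause and inspect the result (via print_event_table, /proc, ps, top) before choosing the next call.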
The print_event_table system call will be useful for showing the
effects of the other system calls. However, you may also want to run
a version of your kernel with some additional debugging printks if it
helps you to demonstrate that your solution is really blocking processes
when it should, really waking them up when it should, or showing any other effects of operations on Events.