
Timers

Whilst working on the filesystem code I realized that the timer code definitely needs improving. As presented, the system only has a single timer; this obviously causes problems if more than one task calls sys_Sleep(). I hadn’t really thought about how to implement multiple timers, but it suddenly occurred to me (whilst I was out walking the dog on the South Downs) that it was really simple. By adding another field to the task structure to hold a timer count, the job is practically done! During the timer interrupt it’s just a question of checking which tasks are waiting on a timer, decrementing the timer field in the task structure and, if it has reached zero, unblocking the task. Really simple! Things sometimes work that way.
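
Something like the following sketch captures the idea; the structure and field names here are just my illustration, not the actual IanOS names.

    /* Illustrative sketch only; structure and field names are invented. */
    struct Task {
        struct Task  *next;        /* next task in the list                 */
        int           blocked;     /* non-zero while the task cannot run    */
        unsigned int  timerCount;  /* ticks left to sleep; 0 means no timer */
        /* ... the rest of the task state ... */
    };

    /* Called once per timer interrupt. */
    void UpdateTimers(struct Task *taskList)
    {
        struct Task *task;

        for (task = taskList; task; task = task->next) {
            if (task->timerCount) {              /* task is waiting on a timer */
                if (--task->timerCount == 0)
                    task->blocked = 0;           /* timer expired: unblock it  */
            }
        }
    }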

Now, back to the filesystem.

File System

Time to enhance the file system, I think. Currently it can only read from the disk, and doesn’t know about subdirectories. First step will be to add write access. The IDE part of this should be fairly straightforward, but the FAT part is possibly a little more complicated. We need to be able to create directory entries and manage the FAT itself.

This is going to mean accessing the FAT itself quite a bit, so it’s going to be necessary to buffer it; memory accesses are obviously preferable to a lot of disk accesses. One possibility is just to keep the whole FAT in memory as an array; this does seem a little wasteful of memory, and makes it more difficult to ascertain which sectors of the FAT need to be rewritten to disk when changes are made (easy if the FAT is rewritten for each change, but more difficult if we want to do these updates in batches). Another possibility is to have an array of pointers to structures containing each sector of the FAT and information about whether the sector has changed since it was read from disk. This would be much more efficient memory-wise, as it would only be necessary to create structures for FAT sectors that have actually been read, but it complicates access to the FAT slightly. I’m not quite sure which is more important – memory space or efficient access. But I do prefer the elegance of the sparse-memory approach (the second possibility). Also, it’s a little more complicated to program – but as the whole point of this exercise is to have fun, that’s an advantage, not a disadvantage!
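
To make the sparse approach a little more concrete, here’s a rough sketch of what I have in mind. All the names, sizes and helper routines below are hypothetical, purely for illustration.

    /* Sketch of the sparse FAT cache; names, sizes and helpers are hypothetical. */
    #define SECTOR_SIZE  512
    #define FAT_SECTORS  256                 /* however many sectors the FAT occupies */

    struct FATSector {
        unsigned char data[SECTOR_SIZE];     /* in-memory copy of one FAT sector   */
        int           dirty;                 /* set when it differs from the disk  */
    };

    /* One pointer per FAT sector; NULL until that sector is first needed. */
    static struct FATSector *fatCache[FAT_SECTORS];

    extern void *AllocKMem(long size);                      /* hypothetical allocator */
    extern void  ReadFATSector(int n, unsigned char *buf);  /* hypothetical disk read */
    extern void  WriteFATSector(int n, unsigned char *buf); /* hypothetical disk write */

    struct FATSector *GetFATSector(int n)
    {
        if (!fatCache[n]) {
            fatCache[n] = AllocKMem(sizeof(struct FATSector));
            ReadFATSector(n, fatCache[n]->data);
            fatCache[n]->dirty = 0;
        }
        return fatCache[n];
    }

    /* A flush routine then only writes back the sectors that have changed. */
    void FlushFAT(void)
    {
        int n;
        for (n = 0; n < FAT_SECTORS; n++)
            if (fatCache[n] && fatCache[n]->dirty) {
                WriteFATSector(n, fatCache[n]->data);
                fatCache[n]->dirty = 0;
            }
    }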

Latest Version

I’ve now uploaded, and documented, the current version. Let me know if there are any problems with the code or the site.

Task Stack Allocation

I made another fairly minor change today. Previously the kernel and user stacks (I’m not quite clear in my mind why we have two stacks, or if the kernel one is ever used – must read the AMD documentation to find out more about this) were being allocated by the tasks themselves. It was just easier, initially, to do things that way. Now I’ve moved the code for this to the task-creation routines. This makes the code cleaner, prevents possible multiple use of the same stack, and makes the tasks themselves easier to write.
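
In outline, the task-creation routine now does something like the following; the names, field layout and stack size here are illustrative rather than the exact code.

    /* Outline only; names, fields and sizes are illustrative. */
    #define STACK_SIZE 0x1000

    extern void *AllocKMem(long size);       /* hypothetical kernel allocator */

    struct Task {
        long kernelStack;
        long userStack;
        long entryPoint;
        /* ... the rest of the task state ... */
    };

    struct Task *NewTask(void (*entry)(void))
    {
        struct Task *task = AllocKMem(sizeof(struct Task));

        /* The creation routine, not the task itself, allocates both stacks,
           so no two tasks can ever end up using the same stack memory.
           Stacks grow downwards, so point at the top of each allocation.   */
        task->kernelStack = (long)AllocKMem(STACK_SIZE) + STACK_SIZE;
        task->userStack   = (long)AllocKMem(STACK_SIZE) + STACK_SIZE;
        task->entryPoint  = (long)entry;

        /* ... fill in the rest of the task state and queue the task ... */
        return task;
    }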

The code now is quite a lot cleaner and more logical than that documented on the main site, so I shall make an effort to document it and update things.

Low Priority Tasks

I’ve now implemented two tasking queues (actually three). There is a queue of runnable tasks, one of blocked tasks, and a low-priority queue. Actually, the last is not a real queue but just a pointer to a single task, which is guaranteed always to be runnable. The task-switch code will only pick this low-priority task if no other task is ready to run. The separation into queues of runnable and non-runnable tasks makes it slightly easier, and more efficient, to find the next runnable task.
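
With the queues separated, the selection logic itself becomes almost trivial. Roughly (the queue and variable names below are invented):

    /* Sketch of the selection logic; queue and variable names are invented. */
    struct Task;                           /* as defined elsewhere             */

    extern struct Task *runnableQueue;     /* tasks that are ready to run      */
    extern struct Task *blockedQueue;      /* tasks waiting on something       */
    extern struct Task *lowPriorityTask;   /* always runnable (an idle loop)   */

    struct Task *NextTask(void)
    {
        if (runnableQueue)
            return runnableQueue;          /* head of the runnable queue       */
        return lowPriorityTask;            /* nothing ready: run the idle task */
    }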

The work involved in doing this took longer than I expected; it’s amazing how many unforeseen bugs turn up when making such a change. The main things that I now realize I have to be a bit more careful about are checking that any registers that may be altered by a routine are preserved, and making sure that any variables are initialized before they are used. Both these may seem obvious, but it is very easy to overlook things when calling a C routine from assembler code.

What becomes more and more obvious when trying to find this sort of bug is just how useful it is to be able to run the Operating System within a debugger (SimNow). I can’t imagine how people found these errors without debuggers. I think the next move is to carefully review the current code with these points in mind before looking to implement more features. Once I’ve done that I’ll post the new version and document it on the main site.

Problems With Tasking

I’ve been doing some work recently converting the filesystem to run as a separate kernel task. (Apart from fitting in with the general design philosophy, this makes the control and scheduling of access to the disk much easier.)  Once I’d sorted out some memory allocation errors this runs fine, but it did highlight a subtle bug. The current tasking code searches through the list of tasks to find the next ready one, and switches to it. If there is no other ready task, it tests to see if the current task is ready to run; if not it issues a “hlt” instruction. This stops the processor until it receives an interrupt. The theory is that this saves the processor from running when it doesn’t need to.

Unfortunately, there is a problem when there are no ready tasks at all (which, in practice, happens quite a lot). The processor will halt in the timer interrupt routine, and then be awakened by the next interrupt. If this next interrupt is a timer one, we get nested interrupts – let this happen long enough (not very long!) and we end up with an ever-increasing stack and an eventual triple fault. The problem is that the timer interrupts never return.
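
In simplified form the offending pattern looks roughly like this (names invented, details glossed over):

    /* Simplified sketch of the bug; names are invented. */
    struct Task;
    extern struct Task *FindReadyTask(void);   /* hypothetical helper */

    void TimerInterrupt(void)
    {
        /* ... acknowledge the interrupt, update the timers ... */

        if (FindReadyTask() == 0) {
            /* No task is ready: halt until the next interrupt.  But we are
               still inside this handler, so its frame is never popped; the
               next timer tick pushes another frame on top of it, and so on
               until the stack overflows and the processor triple-faults.   */
            __asm__ __volatile__("sti; hlt");
        }

        /* ... otherwise switch to the ready task and return with iretq ... */
    }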

A very simple fix to this problem, which I have implemented, is to end the initial task (“tas1.c”) with an infinite loop. Thus there is always at least one task ready to run at all times. But this isn’t very elegant, or very efficient. That initial task will always be taking a share of the processor time, even when there are plenty of useful tasks running. It seems to me that the best solution is to have two task lists – a high-priority one and a low-priority one. Tasks on the low-priority list will only get to run if there are no ready tasks on the high-priority list. Thus we can create a low-priority task that is simply an infinite loop and always have a low-priority task ready to run.
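
For reference, the quick fix amounts to no more than this (“Task1” here just stands in for the code in tas1.c):

    /* Sketch of the quick fix; "Task1" stands in for the code in tas1.c. */
    void Task1(void)
    {
        /* ... the initial task's real work ... */

        for (;;)
            ;   /* loop forever so there is always at least one ready task */
    }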

This still isn’t very elegant (although the idea of multiple levels of priority is obviously a good one). There must be a more direct solution to the initial bug – it just needs a little thinking about! Also, it occurs to me that the task lists should only contain the tasks that are ready to run. This would speed up the search for the next ready task. Something further to think about.

Introduction

I’m going to use this blog to describe my thoughts and work on the evolution of IanOS. I thought that it might be useful to see the changes that take place and the bugs that crop up.