If our objects contained more information, it might not be sufficient to copy the information in and out: other parts of the code might want to keep pointers to these objects, for example, rather than looking up the id every time. This produces two problems.
The first problem is that we use the cache_lock to protect objects: we'd need to make this non-static so the rest of the code can use it. This makes locking trickier, as it is no longer all in one place.
The second problem is the lifetime problem: if another structure keeps a pointer to an object, it presumably expects that pointer to remain valid. Unfortunately, this is only guaranteed while you hold the lock: once you drop it, someone might call cache_delete(), or even worse, add another object which re-uses the same address. As there is only one lock, you can't hold it forever: no-one else would get any work done.
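To make the race concrete, here is a sketch of the kind of code that goes wrong (broken_report() is invented for illustration, and assumes it lives in cache.c where cache_lock and __cache_find() are visible):

/* BROKEN: returns from the lock without holding a reference. */
static void broken_report(int id)
{
        struct object *obj;
        unsigned long flags;

        spin_lock_irqsave(&cache_lock, flags);
        obj = __cache_find(id);
        spin_unlock_irqrestore(&cache_lock, flags);

        /* Another CPU can call cache_delete(id) right here, kfree()ing
           the object -- or even add a new object at the same address. */
        if (obj)
                printk("name: %s\n", obj->name); /* possible use-after-free */
}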
The solution to this problem is to use a reference count: everyone who keeps a pointer to the object increases the count when they first get the object, and drops it when they're finished. Whoever drops the count to zero knows the object is unused, and can actually delete it.
Here is the code:
--- cache.c.interrupt	2003-12-09 14:25:43.000000000 +1100
+++ cache.c.refcnt	2003-12-09 14:33:05.000000000 +1100
@@ -7,6 +7,7 @@
 struct object
 {
         struct list_head list;
+        unsigned int refcnt;
         int id;
         char name[32];
         int popularity;
@@ -17,6 +18,35 @@
 static unsigned int cache_num = 0;
 #define MAX_CACHE_SIZE 10
 
+static void __object_put(struct object *obj)
+{
+        if (--obj->refcnt == 0)
+                kfree(obj);
+}
+
+static void __object_get(struct object *obj)
+{
+        obj->refcnt++;
+}
+
+void object_put(struct object *obj)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&cache_lock, flags);
+        __object_put(obj);
+        spin_unlock_irqrestore(&cache_lock, flags);
+}
+
+void object_get(struct object *obj)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&cache_lock, flags);
+        __object_get(obj);
+        spin_unlock_irqrestore(&cache_lock, flags);
+}
+
 /* Must be holding cache_lock */
 static struct object *__cache_find(int id)
 {
@@ -35,6 +65,7 @@
 {
         BUG_ON(!obj);
         list_del(&obj->list);
+        __object_put(obj);
         cache_num--;
 }
 
@@ -63,6 +94,7 @@
         strlcpy(obj->name, name, sizeof(obj->name));
         obj->id = id;
         obj->popularity = 0;
+        obj->refcnt = 1; /* The cache holds a reference */
 
         spin_lock_irqsave(&cache_lock, flags);
         __cache_add(obj);
@@ -79,18 +111,15 @@
         spin_unlock_irqrestore(&cache_lock, flags);
 }
 
-int cache_find(int id, char *name)
+struct object *cache_find(int id)
 {
         struct object *obj;
-        int ret = -ENOENT;
         unsigned long flags;
 
         spin_lock_irqsave(&cache_lock, flags);
         obj = __cache_find(id);
-        if (obj) {
-                ret = 0;
-                strcpy(name, obj->name);
-        }
+        if (obj)
+                __object_get(obj);
         spin_unlock_irqrestore(&cache_lock, flags);
-        return ret;
+        return obj;
 }
We encapsulate the reference counting in the standard 'get' and 'put' functions. Now we can return the object itself from cache_find(), which has the advantage that the user can now sleep holding the object (eg. to copy_to_user() the name to userspace).
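For example, a caller might now look something like this (give_name_to_user() and its buf argument are hypothetical, invented for this sketch; copy_to_user() may sleep, which is safe here because we hold a reference rather than the lock):

int give_name_to_user(int id, char __user *buf)
{
        struct object *obj = cache_find(id);
        int ret = -ENOENT;

        if (obj) {
                /* May sleep: the reference keeps the object alive. */
                if (copy_to_user(buf, obj->name, strlen(obj->name) + 1))
                        ret = -EFAULT;
                else
                        ret = 0;
                object_put(obj); /* Finished: drop our reference. */
        }
        return ret;
}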
The other point to note is that I said a reference should be held for every pointer to the object: thus the reference count is 1 when the object is first inserted into the cache, covering the pointer the cache itself keeps. In some versions the framework does not hold a reference of its own, but such schemes are more complicated.
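To see why the cache's own reference matters, consider this sketch (the id 42 and the timing comments are invented; it assumes the object exists and that no other users hold references):

        struct object *obj = cache_find(42);    /* refcnt is now 2: cache + us */

        cache_delete(42);                       /* cache drops its reference: 1 */

        /* Still safe to use: we hold the last reference. */
        printk("still valid: %s\n", obj->name);

        object_put(obj);                        /* refcnt hits 0: kfree()ed */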
In practice, atomic_t would usually be used for refcnt. There are a number of atomic operations defined in include/asm/atomic.h: these are guaranteed to be seen atomically from all CPUs in the system, so no lock is required. In this case, it is simpler than using spinlocks, although for anything non-trivial using spinlocks is clearer. atomic_inc() and atomic_dec_and_test() are used instead of the standard increment and decrement operators: atomic_dec_and_test() returns true only for the caller that takes the count to zero, so exactly one CPU ends up freeing the object. The lock is no longer used to protect the reference count itself.
--- cache.c.refcnt	2003-12-09 15:00:35.000000000 +1100
+++ cache.c.refcnt-atomic	2003-12-11 15:49:42.000000000 +1100
@@ -7,7 +7,7 @@
 struct object
 {
         struct list_head list;
-        unsigned int refcnt;
+        atomic_t refcnt;
         int id;
         char name[32];
         int popularity;
@@ -18,33 +18,15 @@
 static unsigned int cache_num = 0;
 #define MAX_CACHE_SIZE 10
 
-static void __object_put(struct object *obj)
-{
-        if (--obj->refcnt == 0)
-                kfree(obj);
-}
-
-static void __object_get(struct object *obj)
-{
-        obj->refcnt++;
-}
-
 void object_put(struct object *obj)
 {
-        unsigned long flags;
-
-        spin_lock_irqsave(&cache_lock, flags);
-        __object_put(obj);
-        spin_unlock_irqrestore(&cache_lock, flags);
+        if (atomic_dec_and_test(&obj->refcnt))
+                kfree(obj);
 }
 
 void object_get(struct object *obj)
 {
-        unsigned long flags;
-
-        spin_lock_irqsave(&cache_lock, flags);
-        __object_get(obj);
-        spin_unlock_irqrestore(&cache_lock, flags);
+        atomic_inc(&obj->refcnt);
 }
 
 /* Must be holding cache_lock */
@@ -65,7 +47,7 @@
 {
         BUG_ON(!obj);
         list_del(&obj->list);
-        __object_put(obj);
+        object_put(obj);
         cache_num--;
 }
 
@@ -94,7 +76,7 @@
         strlcpy(obj->name, name, sizeof(obj->name));
         obj->id = id;
         obj->popularity = 0;
-        obj->refcnt = 1; /* The cache holds a reference */
+        atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
 
         spin_lock_irqsave(&cache_lock, flags);
         __cache_add(obj);
@@ -119,7 +101,7 @@
         spin_lock_irqsave(&cache_lock, flags);
         obj = __cache_find(id);
         if (obj)
-                __object_get(obj);
+                object_get(obj);
         spin_unlock_irqrestore(&cache_lock, flags);
         return obj;
 }
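Note that with the atomic version, object_put() no longer takes cache_lock at all: the last user to drop a reference can free the object from any context, without worrying about lock ordering. (__cache_delete() still calls it with the lock held, which is fine: kfree() may be called with a spinlock held.)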