The next operation to consider is how to delete an entry. This is surprisingly easy, as it is essentially an insert operation that decrements the count instead of incrementing it.
All you have to do is compute the hash functions and XOR the values in again, but this time subtract one from each count:
for each hi(x) do
    XOR B[hi(x)].key with x
    XOR B[hi(x)].value with y
    subtract one from B[hi(x)].count
end for
This works because XOR is its own inverse - the second XOR undoes the first.
Notice that deleting a key-value pair might reduce another element's count back to one, undoing a previous collision. This is, in fact, also the key to listing as many values as possible from those stored in the filter.
The operation corresponds to:
DELETE(x, y): delete the key-value pair, (x, y), from B. This operation always succeeds, provided (x, y) is in B.
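As a concrete illustration, here is a minimal sketch in Python. The cell layout, hash scheme and parameter values are illustrative assumptions, not a reference implementation; in particular, giving each hash function its own subtable is just one common way to keep the k cell positions distinct.

```python
import hashlib

# Illustrative parameters: k hash functions, each mapping into its own
# subtable of m cells, so one pair always touches k distinct cells.
k, m = 3, 8
t = k * m
B = [{"count": 0, "key": 0, "value": 0} for _ in range(t)]

def positions(x):
    """The k cell indices for key x, derived from SHA-256 digests."""
    return [i * m + int.from_bytes(
                hashlib.sha256(f"{i}:{x}".encode()).digest()[:4], "big") % m
            for i in range(k)]

def insert(x, y):
    for p in positions(x):
        B[p]["count"] += 1
        B[p]["key"] ^= x
        B[p]["value"] ^= y

def delete(x, y):
    # Identical to insert except the count is decremented: XOR is its
    # own inverse, so XORing x and y in again removes their contribution.
    for p in positions(x):
        B[p]["count"] -= 1
        B[p]["key"] ^= x
        B[p]["value"] ^= y

insert(42, 7)
insert(99, 1)
delete(42, 7)   # the filter now looks as if only (99, 1) was ever inserted
```

After the delete, the cells touched by (42, 7) are exactly as they would be had the pair never been inserted.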
Listing is the final operation and if you have been following the descriptions of the other operations you should be able to guess how it is going to work.
First, all of the elements in the filter with a count of one have valid key and value fields. So the first task is to scan the storage and extract all of the elements with B[i].count=1. You can then add (B[i].key,B[i].value) to the output list.
Notice there will be duplicates but this doesn't matter as you can remove duplicate keys.
This isn't the end of what you can do, however, because if you remove all such entries from the filter there is a chance that doing so will undo the collisions they caused. If this reduces the count of any element to one then another data element can be retrieved.
So the full algorithm is:
while there is a B[i].count=1 do
    add (B[i].key,B[i].value) to the output list
    perform DELETE(B[i].key,B[i].value)
end while
if the filter is not empty set incomplete_list
return output_list
You can see that if all of the collisions can be undone then the filter should be empty at the end of the operation.
This corresponds to the operation:
LISTENTRIES(): list all the key-value pairs being stored in B. With low (inverse polynomial in t) probability, this operation may return a partial list along with an incomplete_list status.
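The peeling loop described above can be sketched like this (illustrative Python with assumed parameters; real implementations usually also store a per-cell checksum to confirm a cell really holds a single pair before extracting it, but with inserts only, a count of one is enough):

```python
import hashlib

# A sketch of LISTENTRIES: repeatedly pull out any cell whose count is 1,
# then DELETE the recovered pair so the collisions it caused are undone.
k, m = 3, 8
t = k * m
B = [{"count": 0, "key": 0, "value": 0} for _ in range(t)]

def positions(x):
    return [i * m + int.from_bytes(
                hashlib.sha256(f"{i}:{x}".encode()).digest()[:4], "big") % m
            for i in range(k)]

def insert(x, y):
    for p in positions(x):
        B[p]["count"] += 1
        B[p]["key"] ^= x
        B[p]["value"] ^= y

def delete(x, y):
    for p in positions(x):
        B[p]["count"] -= 1
        B[p]["key"] ^= x
        B[p]["value"] ^= y

def list_entries():
    out = []
    while True:
        cell = next((c for c in B if c["count"] == 1), None)
        if cell is None:
            break
        x, y = cell["key"], cell["value"]   # count 1 => the fields are valid
        out.append((x, y))
        delete(x, y)        # may drop other cells' counts back to one
    incomplete = any(c["count"] != 0 for c in B)
    return out, incomplete

for pair in [(1, 10), (2, 20), (3, 30)]:
    insert(*pair)
entries, incomplete = list_entries()
```

With parameters this generous the peel almost always completes; whenever it does, `incomplete` is False and `entries` contains every stored pair.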
So how well does this work?
In terms of performance, if you are using k hash functions and the array has t elements then insertions, deletions and lookups take O(k) time and listing takes O(t) time.
The probability that a GET operation returns a "not found" error when the data actually is in the table, i.e. the data is there but collisions stop it being retrieved, is the same as the false positive rate of the corresponding Bloom filter. This can be made as small as you like by increasing the number of hash functions in use and the size of the table. The same is true of the probability of getting an incomplete listing of the data from the table, which is roughly O(t^(-k+2)) as long as some conditions are met.
The invertible Bloom filter has lots of surprising uses, including working out whether two databases or two sets store the same elements. Basically what you do is create an invertible filter from one set, then delete all the elements of the other set; the elements left at the end of the operation are those that were in only one of the two sets.
You can even discover what the elements are and which set they belonged to, but this is another story.
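The set-difference trick can be sketched as follows, with keys only and illustrative parameters. The sign convention is an assumption: a cell that ends with count +1 holds an element present only in the first set, and count -1 an element present only in the second; production implementations also keep a per-cell checksum to confirm a cell is pure before peeling it, which this sketch omits.

```python
import hashlib

# Set reconciliation sketch: insert every element of A, delete every
# element of Bset, then peel.  Shared elements cancel out completely,
# so what remains encodes the symmetric difference.
k, m = 3, 16
t = k * m
cells = [{"count": 0, "key": 0} for _ in range(t)]

def positions(x):
    return [i * m + int.from_bytes(
                hashlib.sha256(f"{i}:{x}".encode()).digest()[:4], "big") % m
            for i in range(k)]

def update(x, d):           # d = +1 to insert, d = -1 to delete
    for p in positions(x):
        cells[p]["count"] += d
        cells[p]["key"] ^= x

A = {1, 2, 3, 4}
Bset = {3, 4, 5}
for x in A:
    update(x, +1)
for x in Bset:
    update(x, -1)           # 3 and 4 now cancel out completely

only_in_A, only_in_B = set(), set()
for _ in range(t):          # bounded passes; plenty when peeling succeeds
    progress = False
    for c in cells:
        if c["count"] in (1, -1):
            x, sign = c["key"], c["count"]
            (only_in_A if sign == 1 else only_in_B).add(x)
            update(x, -sign)    # remove the recovered element
            progress = True
    if not progress:
        break
```

When the peel drives every count back to zero, `only_in_A` and `only_in_B` are exactly the one-sided elements, which is the "which set did it belong to" story hinted at above.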