...

  • The server prefs file contains an ordered (comma-separated) list of node addresses
  • The server reads that list of nodes on startup and caches the list somewhere on disk
  • If the server starts up and notices that the list of nodes in the prefs file is different from the cached copy, then it needs to do a rebalance (see the sketch below)!
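
A minimal sketch of that startup check, assuming the prefs value is a plain comma-separated string; the cache file location and the rebalance() hook are hypothetical names, not something this page defines.

Code Block
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

// Hypothetical names throughout: prefsNodeList is the value read from the prefs
// file and cacheFile is wherever the server cached the previous list.
void checkNodeListOnStartup(String prefsNodeList, Path cacheFile) throws IOException {
	List<String> configured = Arrays.asList(prefsNodeList.split(","));
	List<String> cached = Files.exists(cacheFile)
			? Arrays.asList(Files.readString(cacheFile, StandardCharsets.UTF_8).split(","))
			: List.of();
	if (!configured.equals(cached)) {
		rebalance(cached, configured); // hypothetical hook: move data to match the new list
		Files.writeString(cacheFile, String.join(",", configured), StandardCharsets.UTF_8);
	}
}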

Look at http://en.wikipedia.org/wiki/List_of_hash_functions for a 64-bit hash function, OR use a stronger hash function and take only the first 8 bytes of its output.

Code Block
ByteBuffer buffer = ByteBuffer.wrap(hashArray);
long hash = buffer.getLong();
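
The page does not say which stronger hash to use; as one possibility, here is a sketch that derives hashArray from a SHA-256 digest (any hash with at least 8 bytes of output would do).

Code Block
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// SHA-256 is an example choice of "stronger hash", not mandated by this design.
// "input" stands for whatever string is being hashed (e.g. a locator).
MessageDigest digest = MessageDigest.getInstance("SHA-256");
byte[] hashArray = digest.digest(input.getBytes(StandardCharsets.UTF_8));
long hash = ByteBuffer.wrap(hashArray).getLong(); // first 8 bytes of the 32-byte digest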

Locator Hashing

Code Block
// We need to get pp distinct nodes for the pool, so we continually hash the
// hash of the locator until we have everything we need.
Integer[] partitionPool = new Integer[pp];
Long hash = null;
for (int i = 0; i < pp; i++) {
	int slot = -1;
	while (slot == -1 || Arrays.asList(partitionPool).contains(slot)) { // keep hashing until we hit an unused node
		hash = hash(hash == null ? locator : hash); // re-hash the previous hash on each attempt
		slot = (int) Math.floorMod(hash, (long) n); // n is the number of nodes; floorMod keeps the slot non-negative
	}
	partitionPool[i] = slot;
}
// We now have an array of node identifiers that form our partition pool.
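
The hash(...) function used above (and in the redundancy-pool block below) is not defined on this page. A minimal sketch, assuming the SHA-256 plus first-8-bytes approach from earlier; taking an Object so it can hash either a string (locator, locator + key) or a previous long hash is an assumption of this sketch, not part of the design.

Code Block
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hashes either a String or a previous Long hash value and returns the first
// 8 bytes of a SHA-256 digest as a 64-bit value.
static long hash(Object input) throws NoSuchAlgorithmException {
	byte[] bytes = (input instanceof Long)
			? ByteBuffer.allocate(Long.BYTES).putLong((Long) input).array()
			: input.toString().getBytes(StandardCharsets.UTF_8);
	byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);
	return ByteBuffer.wrap(digest).getLong();
}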

...

Code Block
// We need to get rp nodes from the partition pool, so we continually hash the
// hash of the locator and key until we have everything we need.
Integer[] redundancyPool = new Integer[rp];
Long hash = null;
for (int i = 0; i < rp; i++) {
	int slot = -1;
	while (slot == -1 || Arrays.asList(redundancyPool).contains(partitionPool[slot])) { // skip nodes already chosen
		hash = hash(hash == null ? locator + key : hash); // re-hash the previous hash on each attempt
		slot = (int) Math.floorMod(hash, (long) partitionPool.length);
	}
	redundancyPool[i] = partitionPool[slot];
}
// We now have an array of node identifiers that form our redundancy pool.

...