What about artificially increasing the size of the hash input?
For instance: when checking whether a contact exists, the client sends 10 hashes of the contact's number, one for each possible extra digit (0-9) appended to it. The server compares these against the single hash it stored for that user (presumably the client appended a random digit in the same way when hashing and uploading its own contact information).
That way the problem grows from 10^10 to 10^11 possibilities, while the bandwidth only increases 10-fold.
It still requires balancing the amount of bandwidth used, but the bandwidth is no longer determined by the total number of users in the system, only by the increase in problem size and the number of contacts a person has.
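The scheme described above might be sketched roughly like this (a minimal illustration, assuming SHA-256 as the hash and plain digit-string phone numbers; function names are hypothetical):

```python
import hashlib
import random

def candidate_hashes(number: str) -> list[str]:
    """All 10 hashes a client sends when querying a contact:
    the number with each possible extra digit appended."""
    return [hashlib.sha256((number + d).encode()).hexdigest()
            for d in "0123456789"]

def registration_hash(number: str) -> str:
    """The single hash a client uploads for its own number:
    the number with one randomly chosen extra digit appended."""
    digit = random.choice("0123456789")
    return hashlib.sha256((number + digit).encode()).hexdigest()

# Lookup: the contact is registered iff the hash the server stored
# appears among the 10 candidates the querying client sent.
stored = registration_hash("5551234567")
print(stored in candidate_hashes("5551234567"))  # True
```

An attacker who obtains the stored hashes must now brute-force 11-digit preimages (10^11) instead of 10-digit ones, while each contact lookup costs only 10 hashes of bandwidth.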
Is that a worthwhile solution?