Sources:

* https://chat.briarproject.org/briar/pl/mziythoxm3rpzqdbdqj1bmthfc
* https://chat.briarproject.org/briar/pl/qjazy5a86ins7r6c9dmnub3wba

Based on the paper "RESCUE: A Resilient and Secure Device-to-Device Communication Framework for Emergencies" (2022, [DOI](https://doi.org/10.26083/tuprints-00017838)), akwizgran had some thoughts on how this could be useful for eventual meshes in Briar.

> some quick thoughts about mesh protocols, after reading the RESCUE paper that nico posted to the readings channel:
> * some form of fair queueing is useful for mitigating the impact of message-flooding attacks
> * we don't want to use certified identities, like RESCUE
> * but in a DTN, encounters between devices are scarce. an attacker can't just manufacture an unlimited number of encounters, in the way that some kinds of (uncertified) identities can be manufactured
> * so we keep a queue of messages received from each device we've encountered (if we receive a message more than once, we only consider the first time we received it)
> * when syncing messages, we round-robin (or random-robin) over the queues to select the next message to send
> * if an attacker floods the network with messages, devices that encounter the attacker will have one very long queue for messages received from the attacker, and a bunch of ordinary length queues for messages received from other devices
> * when one of those devices syncs with a device that hasn't encountered the attacker, it will pass on either some or all of the messages it wants to send, depending on the length of the encounter
> * if the encounter is short, all the queues will be treated fairly so most of the attacker's messages won't be passed on
> * if the encounter is long, all of the messages will be passed on, including all of the attacker's messages
> * the result is that when the network has spare capacity, the attacker can fill that capacity. but when the network becomes full, the attack will mainly affect devices that have encountered the attacker: it will reduce the number of non-attacker messages they pass on. devices further from the attacker will be less affected
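
The per-sender queues and round-robin selection described in the bullets above could look something like the following. This is a minimal illustration, not Briar code; the class and method names are invented for this sketch:

```python
from collections import OrderedDict, deque

class FairSyncQueue:
    """One queue per encountered device, drained round-robin, so a
    flooding sender can't monopolise a short sync encounter."""

    def __init__(self):
        self.queues = OrderedDict()  # sender -> deque of message ids
        self.seen = set()            # only the first receipt of a message counts

    def receive(self, sender, message_id):
        # If we receive a message more than once, only the first
        # receipt determines which queue it lands in.
        if message_id in self.seen:
            return
        self.seen.add(message_id)
        self.queues.setdefault(sender, deque()).append(message_id)

    def next_to_send(self):
        """Pop one message, rotating fairly over the senders' queues."""
        if not self.queues:
            return None
        sender, queue = next(iter(self.queues.items()))
        message_id = queue.popleft()
        if queue:
            # Rotate this sender to the back so the others go first next time.
            self.queues.move_to_end(sender)
        else:
            del self.queues[sender]
        return message_id
```

In a short encounter, the first few calls to `next_to_send()` cycle over all senders, so a flooder's long queue contributes only one message per round; in a long encounter the queues drain completely and every message, including the flooder's, gets passed on, which matches the behaviour described above.
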
>
> this is not a new idea, but until today i wasn't sure how to square it with another idea that's meant to speed up repeated syncs between the same devices:
> * when two devices start syncing, they need to find out which messages they're each storing
> * if there are thousands of messages in circulation then this could require tens of thousands of bytes to be exchanged at the start of every sync
> * for short encounters or slow transports, this could easily take up the whole encounter
> * we expect the same devices to sync with each other repeatedly, and we expect that many of the messages they're storing will be the same from one sync to the next, so there's an opportunity to optimise the sync process by caching information about which messages other devices are storing
> * a simple way to do this is to keep the list of messages we're storing in the order we received them
> * we can cache the lists sent by devices we've synced with recently
> * if we encounter them again, we only need to ask which messages they've received since the time when we cached the list (or, since timestamps are tricky, we can use some other value like an offset to explain to them which part of their list we've already cached)
> * but this only works if the list is in order of arrival. if it's in some other order then we can't assume that the list we received before is a prefix of the current list
> * i couldn't see how to square this with fair queueing. but actually the order of the list doesn't have to be the same as the order in which messages are sent
> * two devices can send their lists to each other, using caching if they've synced before, and then once they've worked out which messages they need to send, they can each send those messages in whatever order they like
> * so we can cache the list of messages for efficient repeated sync, and also use fair queueing to mitigate flooding attacks (as well as other things that would affect the order of transmission, like local rules for prioritising messages)
> * hooray
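
The cached-list idea above can be sketched as follows: keep the stored-message list in arrival order, remember how much of each peer's list we have already seen, and on the next sync ask only for the tail after that offset. This is a hypothetical sketch with invented names; it ignores message expiry and assumes the cached portion really is a prefix of the peer's current list:

```python
class SyncState:
    """Per-device sync state: our stored messages in arrival order,
    plus a cached copy of each peer's list from previous syncs."""

    def __init__(self):
        self.stored = []      # message ids, in order of arrival
        self.peer_lists = {}  # peer id -> entries of their list cached so far

    def store(self, message_id):
        if message_id not in self.stored:
            self.stored.append(message_id)

    def cached_offset(self, peer_id):
        """How much of this peer's list we already know."""
        return len(self.peer_lists.get(peer_id, []))

    def list_since(self, offset):
        """Answer a peer's request with only the uncached tail of our list.
        Works because the list only grows at the end (arrival order)."""
        return self.stored[offset:]

    def update_cache(self, peer_id, tail):
        self.peer_lists.setdefault(peer_id, []).extend(tail)

    def wanted_from(self, peer_id):
        """Messages the peer has that we don't. They can then be sent in
        any order (e.g. fair-queueing order), independent of list order."""
        return [m for m in self.peer_lists.get(peer_id, [])
                if m not in self.stored]
```

On a repeat sync each side only transmits the new tail of its list, so the expensive full exchange happens once per pair of devices, and the actual transmission order of the messages stays free for fair queueing or local prioritisation.
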
>
> oh, i forgot to mention something about the fair queueing idea:
> * an attacker may be able to manufacture whatever identities we use to distinguish between the devices we've encountered (bluetooth addresses or whatever)
> * if an attacker does this then they can effectively use all the round-robin/random-robin slots of the devices they encounter, by using a fresh identifier for syncing each message and therefore getting a separate queue for each message
> * in this scenario, devices that have encountered the attacker almost exclusively send the attacker's messages. so they operate like the attacker itself in the original scenario
> * the fair queueing defence still applies, but the number of attackers is effectively increased
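
The amplification is easy to see with a toy example: if fairness is per identifier and identifiers are free to manufacture, an attacker who presents a fresh identifier per message gets one round-robin slot per message. A standalone illustration with made-up names:

```python
from collections import OrderedDict, deque

# Per-identifier queues on a victim device. The attacker uses a fresh
# identifier for each message; alice reuses her single identifier.
queues = OrderedDict()
for i in range(10):
    queues["sybil-%d" % i] = deque(["attack-%d" % i])
queues["alice"] = deque(["alice-0", "alice-1"])

# One round-robin cycle: one slot per queue.
first_cycle = [q.popleft() for q in queues.values()]
```

Of the 11 messages sent in the first cycle, 10 are the attacker's. Per-identifier fairness still bounds each individual queue, but the attacker now holds most of the queues, which is exactly the "devices that encountered the attacker behave like the attacker" effect described above.
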
>
> another interesting idea from the RESCUE paper is to have each device assign a small number of other devices to a "priority set" and protect those devices' messages from being pushed out of storage
>
> even if the priority sets are chosen randomly, this provides some protection against flooding attacks. i guess intuitively, for any given sender, there are a few devices that are immune to the flooding attack with respect to that sender (the devices that have the sender in their priority sets) and those devices can potentially pass on the sender's messages to each other and eventually to their destinations
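
One possible reading of the priority-set defence is as a storage-eviction rule: when storage is full, never evict messages from senders in our priority set. This is a hypothetical sketch, not RESCUE's actual mechanism; the function and parameter names are invented:

```python
def add_message(storage, capacity, priority_set, sender, message_id):
    """Store (sender, message_id), evicting the oldest message from a
    sender outside our priority set when storage is full.
    Returns True if the message was stored."""
    if len(storage) >= capacity:
        for i, (s, _) in enumerate(storage):
            if s not in priority_set:
                del storage[i]  # evict the oldest unprotected message
                break
        else:
            # Every stored message is protected; drop the new one instead.
            return False
    storage.append((sender, message_id))
    return True
```

Even under a flood, messages from the few senders in this device's priority set survive eviction, so those devices can keep relaying that sender's messages toward their destinations, matching the intuition above.
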
>
> i wonder if there's some way to implement this without certified identities