Overview
I2P's netDb is a specialized distributed database, containing just two types of data - router contact information (RouterInfos) and destination contact information (LeaseSets). Each piece of data is signed by the appropriate party and verified by anyone who uses or stores it. In addition, the data has liveliness information within it, allowing irrelevant entries to be dropped, newer entries to replace older ones, and protection against certain classes of attack.
The netDb is distributed with a simple technique called "floodfill", where a subset of all routers, called "floodfill routers", maintains the distributed database.
RouterInfo
When an I2P router wants to contact another router, it needs to know some key pieces of data - all of which are bundled up and signed by the router into a structure called the "RouterInfo", which is distributed with the SHA256 of the router's identity as the key. The structure itself contains the following (a rough structural sketch appears after the list):
- The router's identity (an encryption key, a signing key, and a certificate)
- The contact addresses at which it can be reached
- When it was published
- A set of arbitrary text options
- The signature of the above, generated by the identity's signing key
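As a rough illustration, the fields above could be modeled as follows. This is a minimal sketch only; the field names and types are assumptions, not the canonical classes of the Java router.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch only; field names and types are assumptions, not the router's canonical classes.
public class RouterInfoSketch {
    public byte[] encryptionPublicKey;   // part of the router identity
    public byte[] signingPublicKey;      // part of the router identity
    public byte[] certificate;           // part of the router identity
    public List<String> addresses;       // contact addresses (transport, host, port, ...)
    public long publishedDate;           // when it was published, milliseconds since the epoch
    public Map<String, String> options;  // arbitrary text options ("caps", "router.version", ...)
    public byte[] signature;             // signature over all of the above by the signing key
}
```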
Expected Options
The following text options, while not strictly required, are expected to be present:
- caps
(Capabilities flags - used to indicate floodfill participation, approximate bandwidth, and perceived reachability)
- D: Medium congestion (as of release 0.9.58)
- E: High congestion (as of release 0.9.58)
- f: Floodfill
- G: Rejecting all tunnels (as of release 0.9.58)
- H: Hidden
- K: Under 12 KBps shared bandwidth
- L: 12 - 48 KBps shared bandwidth (default)
- M: 48 - 64 KBps shared bandwidth
- N: 64 - 128 KBps shared bandwidth
- O: 128 - 256 KBps shared bandwidth
- P: 256 - 2000 KBps shared bandwidth (as of release 0.9.20)
- R: Reachable
- U: Unreachable
- X: Over 2000 KBps shared bandwidth (as of release 0.9.20)
For compatibility with older routers, a router may publish multiple bandwidth letters, for example "PO".
- netId = 2 (Basic network compatibility - a router will refuse to communicate with a peer having a different netId)
- router.version (Used to determine compatibility with newer features and messages)
Notes on R/U capabilities: A router should usually publish the R or U capability, unless the reachability state is currently unknown. R means that the router is directly reachable (no introducers required, not firewalled) on at least one transport address. U means that the router is NOT directly reachable on ANY transport address.
Deprecated options:
- coreVersion (Never used, removed in release 0.9.24)
- stat_uptime = 90m (Unused since version 0.7.9, removed in release 0.9.24)
These values are used by other routers for basic decisions. Should we connect to this router? Should we attempt to route a tunnel through this router? The bandwidth capability flag, in particular, is used only to determine whether the router meets a minimum threshold for routing tunnels. Above the minimum threshold, the advertised bandwidth is not used or trusted anywhere in the router, except for display in the user interface and for debugging and network analysis.
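For illustration, here is a hedged sketch of how a router might interpret the caps string when making these decisions. The helper names, and the choice of 'O' as the minimum bandwidth tier, are assumptions, not the router's actual logic.

```java
// Hypothetical sketch of interpreting the "caps" option; letter meanings follow the list above.
public class CapsSketch {
    // Returns true if the advertised bandwidth tier is at least 'O' (128 - 256 KBps).
    // Treating 'O' as the minimum threshold for routing tunnels is an assumption for this sketch.
    public static boolean meetsBandwidthMinimum(String caps) {
        return caps.indexOf('O') >= 0 || caps.indexOf('P') >= 0 || caps.indexOf('X') >= 0;
    }

    public static boolean isFloodfill(String caps) { return caps.indexOf('f') >= 0; }
    public static boolean isReachable(String caps) { return caps.indexOf('R') >= 0; }
}
```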
Valid NetID numbers:
Usage | NetID Number |
---|---|
Reserved | 0 |
Reserved | 1 |
Current Network (default) | 2 |
Reserved Future Networks | 3 - 15 |
Forks and Test Networks | 16 - 254 |
Reserved | 255 |
Additional Options
Additional text options include a small number of statistics about the router's health, which are aggregated by sites such as stats.i2p for network performance analysis and debugging. These statistics were chosen to provide data crucial to the developers, such as tunnel build success rates, while balancing the need for such data with the side-effects that could result from revealing this data. Current statistics are limited to:
- Exploratory tunnel build success, reject, and timeout rates
- 1 hour average number of participating tunnels
These are optional, but if included, help analysis of network-wide performance. As of API 0.9.58, these statistics are simplified and standardized, as follows:
- Option keys are stat_(statname).(statperiod)
- Option values are ';'-separated
- Stats for event counts or normalized percentages use the 4th value; the first three values are unused but must be present
- Stats for average values use the 1st value, and no ';' separator is required
- For equal weighting of all routers in stats analysis, and for additional anonymity, routers should include these stats only after an uptime of one hour or more, and only one time every 16 times that the RI is published.
Example:
stat_tunnel.buildExploratoryExpire.60m = 0;0;0;53.14
stat_tunnel.buildExploratoryReject.60m = 0;0;0;15.51
stat_tunnel.buildExploratorySuccess.60m = 0;0;0;31.35
stat_tunnel.participatingTunnels.60m = 289.20
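A minimal sketch of producing option values in this standardized form follows. The class and method names are hypothetical; the real router's statistics code differs.

```java
import java.util.Locale;

// Hypothetical sketch of formatting the standardized stat options described above.
public class StatOptionsSketch {
    // Event-count / normalized-percentage stats use only the 4th ';'-separated value.
    public static String formatEventStat(double value) {
        return String.format(Locale.US, "0;0;0;%.2f", value);
    }

    // Average-value stats use a single value with no ';' separator.
    public static String formatAverageStat(double value) {
        return String.format(Locale.US, "%.2f", value);
    }

    public static void main(String[] args) {
        // Reproduces the shape of the example values shown above.
        System.out.println("stat_tunnel.buildExploratorySuccess.60m = " + formatEventStat(31.35));
        System.out.println("stat_tunnel.participatingTunnels.60m = " + formatAverageStat(289.20));
    }
}
```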
Floodfill routers may publish additional data on the number of entries in their network database. These are optional, but if included, help analysis of network-wide performance.
The following two options should be included by floodfill routers in every published RI:
- netdb.knownLeaseSets
- netdb.knownRouters
Example:
netdb.knownLeaseSets = 158
netdb.knownRouters = 11374
The data published can be seen in the router's user interface, but is not used or trusted by any other router.
Family Options
As of release 0.9.24, routers may declare that they are part of a "family", operated by the same entity. Multiple routers in the same family will not be used in a single tunnel.
The family options are:
- family (The family name)
- family.key (The signature type code of the family's Signing Public Key, in ASCII digits, concatenated with ':', concatenated with the Signing Public Key in base 64)
- family.sig (The signature of ((family name in UTF-8) concatenated with (32 byte router hash)), in base 64)
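A hedged sketch of how family.sig could be computed from the definition above. The signature algorithm shown is an assumption, and note that I2P's Base64 alphabet differs slightly from the standard one used here.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;

// Hypothetical sketch: sign (family name in UTF-8 || 32-byte router hash) with the family key.
// The key/signature type is an assumption; I2P's own Base64 alphabet differs from the standard one.
public class FamilySigSketch {
    public static String buildFamilySig(String familyName, byte[] routerHash, PrivateKey familyKey)
            throws GeneralSecurityException {
        byte[] name = familyName.getBytes(StandardCharsets.UTF_8);
        byte[] data = new byte[name.length + routerHash.length];
        System.arraycopy(name, 0, data, 0, name.length);
        System.arraycopy(routerHash, 0, data, name.length, routerHash.length);

        Signature sig = Signature.getInstance("SHA256withECDSA"); // assumed signature type
        sig.initSign(familyKey);
        sig.update(data);
        return Base64.getEncoder().encodeToString(sig.sign());
    }
}
```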
RouterInfo Expiration
RouterInfos have no set expiration time. Each router is free to maintain its own local policy to trade off the frequency of RouterInfo lookups with memory or disk usage. In the current implementation, there are the following general policies:
- There is no expiration during the first hour of uptime, as the persistent stored data may be old.
- There is no expiration if there are 25 or fewer RouterInfos.
- As the number of local RouterInfos grows, the expiration time shrinks, in an attempt to maintain a reasonable number of RouterInfos. The expiration time with fewer than 120 routers is 72 hours, while the expiration time with 300 routers is around 30 hours.
- RouterInfos containing SSU introducers expire in about an hour, as the introducer list expires in about that time.
- Floodfills use a short expiration time (1 hour) for all local RouterInfos, as valid RouterInfos will be frequently republished to them.
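The sketch below illustrates the general shape of such a local policy. The interpolation between the published data points is an assumption, and real implementations differ.

```java
// Illustrative sketch of a local RouterInfo expiration policy. The curve is an assumption,
// loosely fitted to the data points above (72h below 120 routers, ~30h around 300 routers).
public class RouterInfoExpirationSketch {
    public static long expirationMillis(int knownRouterInfos, long uptimeMillis, boolean isFloodfill) {
        long hour = 60 * 60 * 1000L;
        if (uptimeMillis < hour || knownRouterInfos <= 25)
            return Long.MAX_VALUE;           // no expiration yet
        if (isFloodfill)
            return hour;                     // valid RouterInfos are frequently republished
        if (knownRouterInfos < 120)
            return 72 * hour;
        if (knownRouterInfos >= 300)
            return 30 * hour;
        // Shrink linearly between the two published data points (an assumption).
        double t = (knownRouterInfos - 120) / 180.0;
        return (long) ((72 - t * 42) * hour);
    }
}
```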
RouterInfo Persistent Storage
RouterInfos are periodically written to disk so that they are available after a restart.
It may be desirable to persistently store Meta LeaseSets with long expirations. This is implementation-dependent.
LeaseSet
The second piece of data distributed in the netDb is a "LeaseSet" - documenting a group of tunnel entry points (leases) for a particular client destination. Each of these leases specifies the following information:
- The tunnel gateway router (by specifying its identity)
- The tunnel ID on that router to send messages with (a 4 byte number)
- When that tunnel will expire.
The LeaseSet itself is stored in the netDb under the key derived from the SHA256 of the destination. One exception is for Encrypted LeaseSets (LS2), as of release 0.9.38. The SHA256 of the type byte (3) followed by the blinded public key is used for the DHT key, and then rotated as usual. See the Kademlia Closeness Metric section below.
In addition to these leases, the LeaseSet includes:
- The destination itself (an encryption key, a signing key and a certificate)
- Additional encryption public key: used for end-to-end encryption of garlic messages
- Additional signing public key: intended for LeaseSet revocation, but is currently unused.
- Signature of all the LeaseSet data, to make sure the Destination published the LeaseSet.
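As a rough illustration, the Lease and LeaseSet fields described above could be modeled as follows. Field names and types are assumptions, not the canonical classes.

```java
import java.util.Date;
import java.util.List;

// Illustrative sketch only; field names and types are assumptions, not the router's canonical classes.
public class LeaseSetSketch {
    public static class Lease {
        public byte[] gatewayRouterHash; // identity hash of the tunnel gateway router
        public int tunnelId;             // 4-byte tunnel ID on that gateway
        public Date endDate;             // when the tunnel expires
    }

    public byte[] destination;           // encryption key, signing key, and certificate
    public byte[] encryptionPublicKey;   // for end-to-end encryption of garlic messages
    public byte[] revocationSigningKey;  // intended for LeaseSet revocation, currently unused
    public List<Lease> leases;
    public byte[] signature;             // signed by the Destination
}
```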
Lease specification
LeaseSet specification
Lease Javadoc
LeaseSet Javadoc
As of release 0.9.38, three new types of LeaseSets are defined; LeaseSet2, MetaLeaseSet, and EncryptedLeaseSet. See below.
Unpublished LeaseSets
A LeaseSet for a destination used only for outgoing connections is unpublished. It is never sent for publication to a floodfill router. "Client" tunnels, such as those for web browsing and IRC clients, are unpublished. Servers will still be able to send messages back to those unpublished destinations, because of I2NP storage messages.
Revoked LeaseSets
A LeaseSet may be revoked by publishing a new LeaseSet with zero leases. Revocations must be signed by the additional signing key in the LeaseSet. Revocations are not fully implemented, and it is unclear if they have any practical use. This is the only planned use for that signing key, so it is currently unused.
LeaseSet2 (LS2)
As of release 0.9.38, floodfills support a new LeaseSet2 structure. This structure is very similar to the old LeaseSet structure, and serves the same purpose. The new structure provides the flexibility required to support new encryption types, multiple encryption types, options, offline signing keys, and other features. See proposal 123 for details.
Meta LeaseSet (LS2)
As of release 0.9.38, floodfills support a new Meta LeaseSet structure. This structure provides a tree-like structure in the DHT, to refer to other LeaseSets. Using Meta LeaseSets, a site may implement large multihomed services, where several different Destinations are used to provide a common service. The entries in a Meta LeaseSet are Destinations or other Meta LeaseSets, and may have long expirations, up to 18.2 hours. Using this facility, it should be possible to run hundreds or thousands of Destinations hosting a common service. See proposal 123 for details.
Encrypted LeaseSets (LS1)
This section describes the old, insecure method of encrypting LeaseSets using a fixed symmetric key. See below for the LS2 version of Encrypted LeaseSets.
In an encrypted LeaseSet, all Leases are encrypted with a separate key. The leases may only be decoded, and thus the destination may only be contacted, by those with the key. There is no flag or other direct indication that the LeaseSet is encrypted. Encrypted LeaseSets are not widely used, and it is a topic for future work to research whether the user interface and implementation of encrypted LeaseSets could be improved.
Encrypted LeaseSets (LS2)
As of release 0.9.38, floodfills support a new, EncryptedLeaseSet structure. The Destination is hidden, and only a blinded public key and an expiration are visible to the floodfill. Only those that have the full Destination may decrypt the structure. The structure is stored at a DHT location based on the hash of the blinded public key, not the hash of the Destination. See proposal 123 for details.
LeaseSet Expiration
For regular LeaseSets, the expiration is the time of the latest expiration of its leases. For the new LeaseSet2 data structures, the expiration is specified in the header. For LeaseSet2, the expiration should match the latest expiration of its leases. For EncryptedLeaseSet and MetaLeaseSet, the expiration may vary, and a maximum expiration may be enforced; the details are to be determined.
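A minimal sketch of the regular-LeaseSet rule, computing the expiration as the latest lease end date; names are illustrative only.

```java
import java.util.Date;
import java.util.List;

// Sketch: for a regular LeaseSet, the expiration is the latest lease end date.
public class LeaseSetExpirationSketch {
    public static Date expiration(List<Date> leaseEndDates) {
        Date latest = new Date(0);
        for (Date d : leaseEndDates)
            if (d.after(latest))
                latest = d;
        return latest;
    }
}
```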
LeaseSet Persistent Storage
No persistent storage of LeaseSet data is required, since they expire so quickly. However, persistent storage of EncryptedLeaseSet and MetaLeaseSet data with long expirations may be advisable.
Encryption Key Selection (LS2)
LeaseSet2 may contain multiple encryption keys. The keys are in order of server preference, most-preferred first. Default client behavior is to select the first key with a supported encryption type. Clients may use other selection algorithms based on encryption support, relative performance, and other factors.
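A minimal sketch of this default selection behavior follows; the type codes and class names are illustrative only.

```java
import java.util.List;
import java.util.Set;

// Sketch of the default client behavior described above: pick the first (most-preferred)
// key whose encryption type the client supports. Field and class names are illustrative.
public class KeySelectionSketch {
    public static class EncKey {
        public int encType;    // encryption type code
        public byte[] keyData;
    }

    public static EncKey selectKey(List<EncKey> serverKeysInPreferenceOrder, Set<Integer> supportedTypes) {
        for (EncKey k : serverKeysInPreferenceOrder)
            if (supportedTypes.contains(k.encType))
                return k;      // first supported key, in server preference order
        return null;           // no supported encryption type; the destination cannot be contacted
    }
}
```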
Bootstrapping
The netDb is decentralized; however, you do need at least one reference to a peer so that the integration process ties you in. This is accomplished by "reseeding" your router with the RouterInfo of an active peer - specifically, by retrieving their routerInfo-$hash.dat file and storing it in your netDb/ directory. Anyone can provide you with those files - you can even provide them to others by exposing your own netDb directory. To simplify the process, volunteers publish their netDb directories (or a subset) on the regular (non-i2p) network, and the URLs of these directories are hardcoded in I2P. When the router starts up for the first time, it automatically fetches from one of these URLs, selected at random.
Floodfill
The floodfill netDb is a simple distributed storage mechanism. The storage algorithm is simple: send the data to the closest peer that has advertised itself as a floodfill router. When a floodfill router receives a netDb store from a peer that is not a floodfill, it sends the data to a subset of the floodfill peers - those closest (according to the XOR metric) to the key being stored.
Determining who is part of the floodfill netDb is trivial - it is exposed in each router's published routerInfo as a capability.
Floodfills have no central authority and do not form a "consensus" - they only implement a simple DHT overlay.
Floodfill Router Opt-in
Unlike Tor, where the directory servers are hardcoded and trusted, and operated by known entities, the members of the I2P floodfill peer set need not be trusted, and change over time.
To increase reliability of the netDb, and minimize the impact of netDb traffic on a router, floodfill is automatically enabled only on routers that are configured with high bandwidth limits. Routers with high bandwidth limits (which must be manually configured, as the default is much lower) are presumed to be on lower-latency connections, and are more likely to be available 24/7. The current minimum share bandwidth for a floodfill router is 128 KBytes/sec.
In addition, a router must pass several additional tests for health (outbound message queue time, job lag, etc.) before floodfill operation is automatically enabled.
With the current rules for automatic opt-in, approximately 6% of the routers in the network are floodfill routers.
While some peers are manually configured to be floodfill, others are simply high-bandwidth routers who automatically volunteer when the number of floodfill peers drops below a threshold. This prevents any long-term network damage from losing most or all floodfills to an attack. In turn, these peers will un-floodfill themselves when there are too many floodfills outstanding.
Floodfill Router Roles
The only services a floodfill router provides beyond those of non-floodfill routers are accepting netDb stores and responding to netDb queries. Since floodfills are generally high-bandwidth, they are more likely to participate in a high number of tunnels (i.e. be a "relay" for others), but this is not directly related to their distributed database services.
Kademlia Closeness Metric
The netDb uses a simple Kademlia-style XOR metric to determine closeness. To create a Kademlia key, the SHA256 hash of the RouterIdentity or Destination is computed. One exception is for Encrypted LeaseSets (LS2), as of release 0.9.38. The SHA256 of the type byte (3) followed by the blinded public key is used for the DHT key, and then rotated as usual.
A modification to this algorithm is done to increase the cost of Sybil attacks. Instead of taking the SHA256 hash of the key being looked up or stored, the SHA256 hash is taken of the 32-byte binary search key appended with the UTC date represented as an 8-byte ASCII string yyyyMMdd, i.e. SHA256(key + yyyyMMdd). This is called the "routing key", and it changes every day at midnight UTC. Only the search key is modified in this way, not the floodfill router hashes. The daily transformation of the DHT is sometimes called "keyspace rotation", although it isn't strictly a rotation.
Routing keys are never sent on-the-wire in any I2NP message, they are only used locally for determination of distance.
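A minimal sketch of the routing-key derivation and the XOR closeness metric described above; class and method names are illustrative.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Sketch of the routing-key derivation and XOR closeness metric described above.
public class RoutingKeySketch {
    // routingKey = SHA256(key || yyyyMMdd), where yyyyMMdd is the current UTC date as ASCII.
    public static byte[] routingKey(byte[] key32, Date now) throws NoSuchAlgorithmException {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        byte[] date = fmt.format(now).getBytes(StandardCharsets.US_ASCII);

        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(key32);
        sha.update(date);
        return sha.digest();
    }

    // XOR distance between a routing key and a router hash, treated as an unsigned integer.
    public static BigInteger xorDistance(byte[] routingKey, byte[] routerHash) {
        byte[] xor = new byte[routingKey.length];
        for (int i = 0; i < xor.length; i++)
            xor[i] = (byte) (routingKey[i] ^ routerHash[i]);
        return new BigInteger(1, xor);
    }
}
```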
Network Database Segmentation - Sub-Databases
Traditionally, Kademlia-style DHTs are not concerned with preserving the unlinkability of information stored on any particular node in the DHT. For example, a piece of information may be stored to one node in the DHT, then requested back from that node unconditionally. Within I2P's netDb this is not the case: information stored in the DHT may only be shared under certain known circumstances where it is "safe" to do so. This is to prevent a class of attacks where a malicious actor tries to associate a client tunnel with a router by sending a store to a client tunnel, then requesting it back directly from the suspected "Host" of the client tunnel.
Segmentation Structure
I2P routers can implement effective defenses against this attack class provided a few conditions are met. A network database implementation should be able to keep track of whether a database entry was received down a client tunnel or directly. If it was received down a client tunnel, then it should also keep track of which client tunnel it was received through, using the client's local destination. If the entry was received down multiple client tunnels, then the netDb should keep track of all destinations where the entry was observed. It should also keep track of whether an entry was received as a reply to a lookup, or as a store.
In both the Java and C++ implementations, this is achieved by using a single "Main" netDb for direct lookups and floodfill operations first. This main netDb exists in the router context. Then, each client is given its own version of the netDb, which is used to capture database entries sent to client tunnels and respond to lookups sent down client tunnels. We call these "Client Network Databases" or "Sub-Databases", and they exist in the client context. The netDb operated by the client exists for the lifetime of the client only and contains only entries that are communicated with the client's tunnels. This makes it impossible for entries sent down client tunnels to overlap with entries sent directly to the router.
Additionally, each netDb needs to be able to remember whether a database entry was received because it was sent to one of our destinations, or because it was requested by us as part of a lookup. If a database entry was received as a store - that is, some other router sent it to us - then a netDb should respond to requests for the entry when another router looks up the key. However, if it was received as a reply to a query, then the netDb should only reply to a query for the entry if the entry had already been stored to the same destination. A client should never answer queries with an entry from the main netDb, only from its own client network database.
These strategies should be applied in combination. Together, they "Segment" the netDb and secure it against such attacks.
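A hedged sketch of the reply rules described in this section; the structure and names are illustrative, not the actual Java or C++ implementation.

```java
// Illustrative sketch of the sub-database reply rules described above.
public class SubDbAnswerSketch {
    public enum Source { DIRECT_STORE, LOOKUP_REPLY }

    // A client sub-database answers a query for an entry only if the entry was stored to that
    // same destination (not merely learned as a reply to our own lookup), and a client never
    // answers from the router's main netDb.
    public static boolean shouldAnswer(boolean fromMainNetDb, Source how, boolean storedToSameDestination) {
        if (fromMainNetDb)
            return false;                       // clients never answer from the main netDb
        if (how == Source.DIRECT_STORE)
            return true;                        // another router stored it to us
        return storedToSameDestination;         // lookup replies are only shared if also stored here
    }
}
```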
Storage, Verification, and Lookup Mechanics
RouterInfo Storage to Peers
I2NP DatabaseStoreMessages containing the local RouterInfo are exchanged with peers as a part of the initialization of a NTCP or SSU transport connection.
LeaseSet Storage to Peers
I2NP DatabaseStoreMessages containing the local LeaseSet are periodically exchanged with peers by bundling them in a garlic message along with normal traffic from the related Destination. This allows an initial response, and later responses, to be sent to an appropriate Lease, without requiring any LeaseSet lookups, or requiring the communicating Destinations to have published LeaseSets at all.
Floodfill Selection
The DatabaseStoreMessage should be sent to the floodfill that is closest to the current routing key for the RouterInfo or LeaseSet being stored. Currently, the closest floodfill is found by a search in the local database. Even if that floodfill is not actually closest, it will flood it "closer" by sending it to multiple other floodfills. This provides a high degree of fault-tolerance.
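A minimal sketch of such a local-database search for the closest floodfill; names are illustrative only.

```java
import java.math.BigInteger;
import java.util.List;

// Sketch of selecting the known floodfill closest (by XOR distance) to a routing key.
public class ClosestFloodfillSketch {
    public static byte[] closest(byte[] routingKey, List<byte[]> floodfillHashes) {
        byte[] best = null;
        BigInteger bestDistance = null;
        for (byte[] hash : floodfillHashes) {
            BigInteger d = xorDistance(routingKey, hash);
            if (bestDistance == null || d.compareTo(bestDistance) < 0) {
                best = hash;
                bestDistance = d;
            }
        }
        return best; // null if no floodfills are known locally
    }

    private static BigInteger xorDistance(byte[] a, byte[] b) {
        byte[] xor = new byte[a.length];
        for (int i = 0; i < a.length; i++)
            xor[i] = (byte) (a[i] ^ b[i]);
        return new BigInteger(1, xor);
    }
}
```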
In traditional Kademlia, a peer would do a "find-closest" search before inserting an item in the DHT to the closest target. As the verify operation will tend to discover closer floodfills if they are present, a router will quickly improve its knowledge of the DHT "neighborhood" for the RouterInfo and LeaseSets it regularly publishes. While I2NP does not define a "find-closest" message, if it becomes necessary, a router may simply do an iterative search for a key with the least significant bit flipped (i.e. key ^ 0x01) until no closer peers are received in the DatabaseSearchReplyMessages. This ensures that the true closest peer will be found even if a more-distant peer had the netdb item.
RouterInfo Storage to Floodfills
A router publishes its own RouterInfo by directly connecting to a floodfill router and sending it a I2NP DatabaseStoreMessage with a nonzero Reply Token. The message is not end-to-end garlic encrypted, as this is a direct connection, so there are no intervening routers (and no need to hide this data anyway). The floodfill router replies with a I2NP DeliveryStatusMessage, with the Message ID set to the value of the Reply Token.
In some circumstances, a router may also send the RouterInfo DatabaseStoreMessage out an exploratory tunnel; for example, due to connection limits, connection incompatibility, or a desire to hide the actual IP from the floodfill. The floodfill may not accept such a store in times of overload or based on other criteria; whether to explicitly declare non-direct store of a RouterInfo illegal is a topic for further study.
LeaseSet Storage to Floodfills
Storage of LeaseSets is much more sensitive than for RouterInfos, as a router must take care that the LeaseSet cannot be associated with the router.
A router publishes a local LeaseSet by sending a I2NP DatabaseStoreMessage with a nonzero Reply Token over an outbound client tunnel for that Destination. The message is end-to-end garlic encrypted using the Destination's Session Key Manager, to hide the message from the tunnel's outbound endpoint. The floodfill router replies with a I2NP DeliveryStatusMessage, with the Message ID set to the value of the Reply Token. This message is sent back to one of the client's inbound tunnels.
Flooding
Like any router, a floodfill uses various criteria to validate the LeaseSet or RouterInfo before storing it locally. These criteria may be adaptive and dependent on current conditions including current load, netdb size, and other factors. All validation must be done before flooding.
After a floodfill router receives a DatabaseStoreMessage containing a valid RouterInfo or LeaseSet which is newer than that previously stored in its local NetDb, it "floods" it. To flood a NetDb entry, it looks up several (currently 3) floodfill routers closest to the routing key of the NetDb entry. (The routing key is the SHA256 Hash of the RouterIdentity or Destination with the date (yyyyMMdd) appended.) By flooding to those closest to the key, not closest to itself, the floodfill ensures that the storage gets to the right place, even if the storing router did not have good knowledge of the DHT "neighborhood" for the routing key.
The floodfill then directly connects to each of those peers and sends it a I2NP DatabaseStoreMessage with a zero Reply Token. The message is not end-to-end garlic encrypted, as this is a direct connection, so there are no intervening routers (and no need to hide this data anyway). The other routers do not reply or re-flood, as the Reply Token is zero.
Floodfills must not flood via tunnels; the DatabaseStoreMessage must be sent over a direct connection.
Floodfills must never flood an expired LeaseSet or a RouterInfo published more than one hour ago.
RouterInfo and LeaseSet Lookup
The I2NP DatabaseLookupMessage is used to request a netdb entry from a floodfill router. Lookups are sent out one of the router's outbound exploratory tunnels. The replies are specified to return via one of the router's inbound exploratory tunnels.
Lookups are generally sent to the two "good" (the connection doesn't fail) floodfill routers closest to the requested key, in parallel.
If the key is found locally by the floodfill router, it responds with a I2NP DatabaseStoreMessage. If the key is not found locally by the floodfill router, it responds with a I2NP DatabaseSearchReplyMessage containing a list of other floodfill routers close to the key.
LeaseSet lookups are garlic encrypted end-to-end as of release 0.9.5. RouterInfo lookups are not encrypted and thus are vulnerable to snooping by the outbound endpoint (OBEP) of the client tunnel. This is due to the expense of the ElGamal encryption. RouterInfo lookup encryption may be enabled in a future release.
As of release 0.9.7, replies to a LeaseSet lookup (a DatabaseStoreMessage or a DatabaseSearchReplyMessage) will be encrypted by including the session key and tag in the lookup. This hides the reply from the inbound gateway (IBGW) of the reply tunnel. Responses to RouterInfo lookups will be encrypted if we enable the lookup encryption.
(Reference: Hashing it out in Public Sections 2.2-2.3 for terms below in italics)
Due to the relatively small size of the network and flooding redundancy, lookups are usually O(1) rather than O(log n). A router is highly likely to know a floodfill router close enough to the key to get the answer on the first try. In releases prior to 0.8.9, routers used a lookup redundancy of two (that is, two lookups were performed in parallel to different peers), and neither recursive nor iterative routing for lookups was implemented. Queries were sent through multiple routes simultaneously to reduce the chance of query failure.
As of release 0.8.9, iterative lookups are implemented with no lookup redundancy. This is a more efficient and reliable lookup that will work much better when not all floodfill peers are known, and it removes a serious limitation to network growth. As the network grows and each router knows only a small subset of the floodfill peers, lookups will become O(log n). Even if the peer does not return references closer to the key, the lookup continues with the next-closest peer, for added robustness, and to prevent a malicious floodfill from black-holing a part of the key space. Lookups continue until a total lookup timeout is reached, or the maximum number of peers is queried.
Node IDs are verifiable in that we use the router hash directly as both the node ID and the Kademlia key. Incorrect responses that are not closer to the search key are generally ignored. Given the current size of the network, a router has detailed knowledge of the neighborhood of the destination ID space.
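A hedged sketch of the iterative lookup loop described above. The network query is a hypothetical placeholder, not a real I2NP API, and the real implementation differs.

```java
import java.math.BigInteger;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Set;

// Hedged sketch of the iterative lookup described above. queryFloodfill() is a hypothetical
// placeholder for sending a DatabaseLookupMessage and collecting the hashes returned in a
// DatabaseSearchReplyMessage; a DatabaseStoreMessage reply would end the lookup successfully.
public abstract class IterativeLookupSketch {
    protected abstract List<byte[]> queryFloodfill(byte[] peerHash, byte[] routingKey);

    public void lookup(byte[] routingKey, List<byte[]> initialPeers, int maxPeers, long timeoutMillis) {
        Comparator<byte[]> byDistance = Comparator.comparing(h -> xorDistance(routingKey, h));
        PriorityQueue<byte[]> toQuery = new PriorityQueue<>(byDistance);
        toQuery.addAll(initialPeers);
        Set<String> queried = new HashSet<>();
        long deadline = System.currentTimeMillis() + timeoutMillis;

        // Keep querying the next-closest peer, even when no closer references are returned,
        // until the timeout is reached or the maximum number of peers has been queried.
        while (!toQuery.isEmpty() && queried.size() < maxPeers && System.currentTimeMillis() < deadline) {
            byte[] peer = toQuery.poll();
            if (!queried.add(Arrays.toString(peer)))
                continue; // already asked this floodfill
            for (byte[] reference : queryFloodfill(peer, routingKey))
                toQuery.add(reference);
        }
    }

    private static BigInteger xorDistance(byte[] a, byte[] b) {
        byte[] xor = new byte[a.length];
        for (int i = 0; i < a.length; i++)
            xor[i] = (byte) (a[i] ^ b[i]);
        return new BigInteger(1, xor);
    }
}
```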
RouterInfo Storage Verification
Note: RouterInfo verification is disabled as of release 0.9.7.1 to prevent the attack described in the paper Practical Attacks Against the I2P Network. It is not clear if verification can be redesigned to be done safely.
To verify a storage was successful, a router simply waits about 10 seconds, then sends a lookup to another floodfill router close to the key (but not the one the store was sent to). Lookups are sent out one of the router's outbound exploratory tunnels. Lookups are end-to-end garlic encrypted to prevent snooping by the outbound endpoint (OBEP).
LeaseSet Storage Verification
To verify a storage was successful, a router simply waits about 10 seconds, then sends a lookup to another floodfill router close to the key (but not the one the store was sent to). Lookups are sent out one of the outbound client tunnels for the destination of the LeaseSet being verified. To prevent snooping by the OBEP of the outbound tunnel, lookups are end-to-end garlic encrypted. The replies are specified to return via one of the client's inbound tunnels.
As of release 0.9.7, replies for both RouterInfo and LeaseSet lookups (a DatabaseStoreMessage or a DatabaseSearchReplyMessage) will be encrypted, to hide the reply from the inbound gateway (IBGW) of the reply tunnel.
Exploration
Exploration is a special form of netdb lookup, where a router attempts to learn about new routers. It does this by sending a floodfill router a I2NP DatabaseLookup Message, looking for a random key. As this lookup will fail, the floodfill would normally respond with a I2NP DatabaseSearchReplyMessage containing hashes of floodfill routers close to the key. This would not be helpful, as the requesting router probably already knows those floodfills, and it would be impractical to add all floodfill routers to the "don't include" field of the DatabaseLookup Message. For an exploration query, the requesting router sets a special flag in the DatabaseLookup Message. The floodfill will then respond only with non-floodfill routers close to the requested key.
Notes on Lookup Responses
The response to a lookup request is either a Database Store Message (on success) or a Database Search Reply Message (on failure). The DSRM contains a 'from' router hash field to indicate the source of the reply; the DSM does not. The DSRM 'from' field is unauthenticated and may be spoofed or invalid. There are no other response tags. Therefore, when making multiple requests in parallel, it is difficult to monitor the performance of the various floodfill routers.
MultiHoming
Destinations may be hosted on multiple routers simultaneously, by using the same private and public keys (traditionally stored in eepPriv.dat files). As both instances will periodically publish their signed LeaseSets to the floodfill peers, the most recently published LeaseSet will be returned to a peer requesting a database lookup. As LeaseSets have (at most) a 10 minute lifetime, should a particular instance go down, the outage will be 10 minutes at most, and generally much less than that. The multihoming function has been verified and is in use by several services on the network.
As of release 0.9.38, Meta LeaseSets (described above) provide another way to implement large multihomed services, in which several different Destinations are used to provide a common service. See proposal 123 for details.
Threat Analysis
Also discussed on the threat model page.
A hostile user may attempt to harm the network by creating one or more floodfill routers and crafting them to offer bad, slow, or no responses. Some scenarios are discussed below.
General Mitigation Through Growth
There are currently around 1700 floodfill routers in the network. Most of the following attacks will become more difficult, or have less impact, as the network size and number of floodfill routers increase.
General Mitigation Through Redundancy
Via flooding, all netdb entries are stored on the 3 floodfill routers closest to the key.
Forgeries
All netdb entries are signed by their creators, so no router may forge a RouterInfo or LeaseSet.
Slow or Unresponsive
Each router maintains an expanded set of statistics in the peer profile for each floodfill router, covering various quality metrics for that peer. The set includes:
- Average response time
- Percentage of queries answered with the data requested
- Percentage of stores that were successfully verified
- Last successful store
- Last successful lookup
- Last response
Each time a router needs to make a determination on which floodfill router is closest to a key, it uses these metrics to determine which floodfill routers are "good". The methods, and thresholds, used to determine "goodness" are relatively new, and are subject to further analysis and improvement. While a completely unresponsive router will quickly be identified and avoided, routers that are only sometimes malicious may be much harder to deal with.
Sybil Attack (Full Keyspace)
An attacker may mount a Sybil attack by creating a large number of floodfill routers spread throughout the keyspace.
(In a related example, a researcher recently created a large number of Tor relays.) If successful, this could be an effective DOS attack on the entire network.
If the floodfills are not sufficiently misbehaving to be marked as "bad" using the peer profile metrics described above, this is a difficult scenario to handle. Tor's response can be much more nimble in the relay case, as the suspicious relays can be manually removed from the consensus. Some possible responses for the I2P network are listed below, however none of them is completely satisfactory:
- Compile a list of bad router hashes or IPs, and announce the list through various means (console news, website, forum, etc.); users would have to manually download the list and add it to their local "blacklist".
- Ask everyone in the network to enable floodfill manually (fight Sybil with more Sybil)
- Release a new software version that includes the hardcoded "bad" list
- Release a new software version that improves the peer profile metrics and thresholds, in an attempt to automatically identify the "bad" peers.
- Add software that disqualifies floodfills if too many of them are in a single IP block
- Implement an automatic subscription-based blacklist controlled by a single individual or group. This would essentially implement a portion of the Tor "consensus" model. Unfortunately it would also give a single individual or group the power to block participation of any particular router or IP in the network, or even to completely shutdown or destroy the entire network.
This attack becomes more difficult as the network size grows.
Sybil Attack (Partial Keyspace)
An attacker may mount a Sybil attack by creating a small number (8-15) of floodfill routers clustered closely in the keyspace, and distribute the RouterInfos for these routers widely. Then, all lookups and stores for a key in that keyspace would be directed to one of the attacker's routers. If successful, this could be an effective DOS attack on a particular I2P Site, for example.
As the keyspace is indexed by the cryptographic (SHA256) Hash of the key, an attacker must use a brute-force method to repeatedly generate router hashes until he has enough that are sufficiently close to the key. The amount of computational power required for this, which is dependent on network size, is unknown.
As a partial defense against this attack, the algorithm used to determine Kademlia "closeness" varies over time. Rather than using the Hash of the key (i.e. H(k)) to determine closeness, we use the Hash of the key appended with the current date string, i.e. H(k + yyyyMMdd). This is done by a function called the "routing key generator", which transforms the original key into a "routing key". In other words, the entire netdb keyspace "rotates" every day at UTC midnight. Any partial-keyspace attack would have to be regenerated every day, for after the rotation, the attacking routers would no longer be close to the target key, or to each other.
This attack becomes more difficult as the network size grows. However, recent research demonstrates that the keyspace rotation is not particularly effective. An attacker can precompute numerous router hashes in advance, and only a few routers are sufficient to "eclipse" a portion of the keyspace within a half hour after rotation.
One consequence of daily keyspace rotation is that the distributed network database may become unreliable for a few minutes after the rotation -- lookups will fail because the new "closest" router has not received a store yet. The extent of the issue, and methods for mitigation (for example netdb "handoffs" at midnight) are a topic for further study.
Bootstrap Attacks
An attacker could attempt to boot new routers into an isolated or majority-controlled network by taking over a reseed website, or tricking the developers into adding his reseed website to the hardcoded list in the router.
Several defenses are possible, and most of these are planned:
- Disallow fallback from HTTPS to HTTP for reseeding. A MITM attacker could simply block HTTPS, then respond to the HTTP request.
- Bundling reseed data in the installer
Defenses that are implemented:
- Changing the reseed task to fetch a subset of RouterInfos from each of several reseed sites rather than using only a single site
- Creating an out-of-network reseed monitoring service that periodically polls reseed websites and verifies that the data are not stale or inconsistent with other views of the network
- As of release 0.9.14, reseed data is bundled into a signed zip file and the signature is verified when downloaded. See the su3 specification for details.
Query Capture
See also the lookup section above. (Reference: Hashing it out in Public Sections 2.2-2.3 for terms below in italics)
Similar to a bootstrap attack, an attacker using a floodfill router could attempt to "steer" peers to a subset of routers controlled by him by returning their references.
This is unlikely to work via exploration, because exploration is a low-frequency task. Routers acquire a majority of their peer references through normal tunnel building activity. Exploration results are generally limited to a few router hashes, and each exploration query is directed to a random floodfill router.
As of release 0.8.9, iterative lookups are implemented. Floodfill router references returned in a I2NP DatabaseSearchReplyMessage response to a lookup are followed if they are closer (or the next closest) to the lookup key. The requesting router does not trust that the references are actually closer to the key (i.e. they are not verifiably correct). The lookup also does not stop when no closer key is found, but continues by querying the next-closest node, until the timeout or maximum number of queries is reached. This prevents a malicious floodfill from black-holing a part of the key space. Also, the daily keyspace rotation requires an attacker to regenerate a router info within the desired key space region. This design ensures that the query capture attack described in Hashing it out in Public is much more difficult.
DHT-Based Relay Selection
(Reference: Hashing it out in Public Section 3)
This doesn't have much to do with floodfill, but see the peer selection page for a discussion of the vulnerabilities of peer selection for tunnels.
Information Leaks
(Reference: In Search of an Anonymous and Secure Lookup Section 3)
This paper addresses weaknesses in the "Finger Table" DHT lookups used by Torsk and NISAN. At first glance, these do not appear to apply to I2P. First, the use of DHT by Torsk and NISAN is significantly different from that in I2P. Second, I2P's network database lookups are only loosely correlated to the peer selection and tunnel building processes; only previously-known peers are used for tunnels. Also, peer selection is unrelated to any notion of DHT key-closeness.
Some of this may actually be more interesting when the I2P network gets much larger. Right now, each router knows a large proportion of the network, so looking up a particular Router Info in the network database is not strongly indicative of a future intent to use that router in a tunnel. Perhaps when the network is 100 times larger, the lookup may be more correlative. Of course, a larger network makes a Sybil attack that much harder.
However, the general issue of DHT information leakage in I2P needs further investigation. The floodfill routers are in a position to observe queries and gather information. Certainly, at a level of f = 0.2 (20% malicious nodes, as specified in the paper) we expect that many of the Sybil threats we describe (here, here and here) become problematic for several reasons.
History
Moved to the netdb discussion page.
Future Work
End-to-end encryption of additional netDb lookups and responses.
Better methods for tracking lookup responses.