Go IPFS v0.4.20 Release Notes

Release Date: 2019-04-16
  • We're happy to release go-ipfs 0.4.20. This release includes some critical
    performance and stability fixes, so all users should upgrade ASAP.

    This is also the first release to use go modules instead of gx. While gx has
    been a great way to dogfood an IPFS-based package manager, building and
    maintaining a custom package manager is a lot of work and we haven't been able
    to dedicate enough time to bring the user experience of gx to an acceptable
    level. You can read #5850 for some discussion on this matter.

    ๐Ÿณ Docker

    As of this release, it's now much easier to run arbitrary IPFS commands within
    the Docker container:

    > docker run --name my-ipfs ipfs/go-ipfs:v0.4.20 config profile apply server # apply the server profile
    > docker start my-ipfs # start the daemon
    

    This release also reverts a change that caused some significant trouble in
    0.4.19. If you've been running into Docker permission errors in 0.4.19, please
    upgrade.

    WebUI

    This release contains a major WebUI release with some significant improvements
    to the file browser and new opt-in, privately hosted, anonymous usage
    analytics.

    Commands

    As usual, we've made several changes and improvements to our commands. The most
    notable changes are listed in this section.

    New: ipfs version deps

    This release includes a new command, ipfs version deps, to list all
    dependencies (with versions) of the current go-ipfs build. This should make it
    easy to tell exactly how go-ipfs was built when tracking down issues.

    New: ipfs add URL

    ๐Ÿ‘ The ipfs add command has gained support for URLs. This means you can:

    1. Add files with ipfs add URL instead of downloading the file first.
    2. Replace all uses of the ipfs urlstore command with a call to
      ipfs add --nocopy. The ipfs urlstore command will be deprecated in a future
      release.

    Changed: ipfs swarm connect

    The ipfs swarm connect command has a few new features:

    It now marks the newly created connection as "important". This should ensure
    that the connection manager won't come along later and close the connection if
    it doesn't think it's being used.

    It can now resolve /dnsaddr addresses that don't end in a peer ID. For
    example, you can now run ipfs swarm connect /dnsaddr/bootstrap.libp2p.io to
    connect to one of the bootstrap peers at random. NOTE: This could connect you to
    an arbitrary peer as DNS is not secure (by default). Please do not rely on
    this except for testing or unless you know what you're doing.

    Finally, ipfs swarm connect now returns all errors on failure. This should
    make it much easier to debug connectivity issues. For example, one might see an
    error like:

    Error: connect QmYou failure: dial attempt failed: 6 errors occurred:
        * <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/127.0.0.1/tcp/4001) dial attempt failed: dial tcp4 127.0.0.1:4001: connect: connection refused
        * <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/::1/tcp/4001) dial attempt failed: dial tcp6 [::1]:4001: connect: connection refused
        * <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/2604::1/tcp/4001) dial attempt failed: dial tcp6 [2604::1]:4001: connect: network is unreachable
        * <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip6/2602::1/tcp/4001) dial attempt failed: dial tcp6 [2602::1]:4001: connect: network is unreachable
        * <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/150.0.1.2/tcp/4001) dial attempt failed: dial tcp4 0.0.0.0:4001->150.0.1.2:4001: i/o timeout
        * <peer.ID Qm*Me> --> <peer.ID Qm*You> (/ip4/200.0.1.2/tcp/4001) dial attempt failed: dial tcp4 0.0.0.0:4001->200.0.1.2:4001: i/o timeout
    

    Changed: ipfs bitswap stat

    ipfs bitswap stat no longer lists bitswap partners unless the -v flag is
    passed. That is, it will now return:

    > ipfs bitswap stat
    bitswap status
        provides buffer: 0 / 256
        blocks received: 0
        blocks sent: 79
        data received: 0
        data sent: 672706
        dup blocks received: 0
        dup data received: 0 B
        wantlist [0 keys]
        partners [197]
    

    Instead of:

    > ipfs bitswap stat -v
    bitswap status
        provides buffer: 0 / 256
        blocks received: 0
        blocks sent: 79
        data received: 0
        data sent: 672706
        dup blocks received: 0
        dup data received: 0 B
        wantlist [0 keys]
        partners [203]
            QmNQTTTRCDpCYCiiu6TYWCqEa7ShAUo9jrZJvWngfSu1mL
            QmNWaxbqERvdcgoWpqAhDMrbK2gKi3SMGk3LUEvfcqZcf4
            QmNgSVpgZVEd41pBX6DyCaHRof8UmUJLqQ3XH2qNL9xLvN
            ... omitting 200 lines ...
    

    Changed: ipfs repo stat --human

    The --human flag in the ipfs repo stat command now intelligently picks a
    size unit instead of always using MiB.

    Changed: ipfs resolve (ipfs dns, ipfs name resolve)

    All of the resolve commands now:

    1. Resolve recursively (up to 32 steps) by default to better match user
      expectations (these commands used to be non-recursive by default). To turn
      recursion off, pass -r false.
    2. When resolving non-recursively, these commands no longer fail when partially
      resolving a name. Instead, they simply return the intermediate result.

    Changed: ipfs files flush

    The ipfs files flush command now returns the CID of the flushed file.

    ๐ŸŽ Performance And Reliability

    This release has the usual collection of performance and reliability
    improvements.

    Badger Memory Usage

    Those of you using the badger datastore should notice reduced memory usage in
    this release due to some upstream changes. Badger still uses significantly more
    memory than the default datastore configuration, but this will hopefully continue
    to improve.

    Bitswap

    We fixed some critical CPU utilization regressions in bitswap for this release.
    If you've been noticing CPU regressions in go-ipfs 0.4.19, especially when
    running a public gateway, upgrading to 0.4.20 will likely fix them.

    Relays

    After AutoRelay was introduced in go-ipfs 0.4.19, the number of peers connecting
    through relays skyrocketed to over 120K concurrent peers. This highlighted some
    performance issues that we've now fixed in this release. Specifically:

    • We've significantly reduced the amount of memory allocated per-peer.
    • We've fixed a bug where relays might, in rare cases, try to actively dial a
      peer to relay traffic. By default, relays only forward traffic between peers
      already connected to the relay.
    • ๐ŸŽ We've fixed quite a number of performance issues that only show up when
      rapidly forming new connections. This will actually help all nodes but will
      especially help relays.

    If you've enabled relay hop (Swarm.EnableRelayHop) in go-ipfs 0.4.19 and it
    hasn't burned down your machine yet, this release should improve things
    significantly. However, relays are still under heavy load so running an open
    relay will continue to be resource intensive.

    We're continuing to investigate this issue and have a few more patches on the
    way that, unfortunately, won't make it into this release.

    Panics

    We've fixed two notable panics in this release:

    • We've fixed a frequent panic in the DHT.
    • We've fixed an occasional panic in the experimental QUIC transport.

    Content Routing

    IPFS announces and finds content by sending and retrieving content routing
    ("provider") records to and from the DHT. Unfortunately, sending out these
    records can be quite resource intensive.

    This release has two changes to alleviate this: a reduced number of initial
    provide workers and a persistent provider queue.

    We've reduced the number of parallel initial provide workers (workers that send
    out provider records when content is initially added to go-ipfs) from 512 to 6.
    Each provide request (currently, due to some issues in our DHT) tries to
    establish hundreds of connections, significantly impacting the performance of
    go-ipfs and crashing some routers.

    We've introduced a new persistent provider queue for files added via ipfs add
    and ipfs pin add. When new directory trees are added to go-ipfs, go-ipfs will
    add the root/final CID to this queue. Then, in the background, go-ipfs will walk
    the queue, sequentially sending out provider records for each CID.

    This ensures that root CIDs are sent out as soon as possible and are sent even
    when files are added when the go-ipfs daemon isn't running.

    As an example, let's add a directory tree to go-ipfs:

    > # We're going to do this in "online" mode first so let's start the daemon.
    > ipfs daemon &
    ...
    Daemon is ready
    > # Now, we're going to create a directory to add.
    > mkdir foo
    > for i in {0..1000}; do echo $i > foo/$i; done
    > # Finally, we're going to add it.
    > ipfs add -r foo
    added QmUQcSjQx2bg4cSe2rUZyQi6F8QtJFJb74fWL7D784UWf9 foo/0
    ...
    added QmQac2chFyJ24yfG2Dfuqg1P5gipLcgUDuiuYkQ5ExwGap foo/990
    added QmQWwz9haeQ5T2QmQeXzqspKdowzYELShBCLzLJjVa2DuV foo/991
    added QmQ5D4MtHUN4LTS4n7mgyHyaUukieMMyCfvnzXQAAbgTJm foo/992
    added QmZq4n4KRNq3k1ovzxJ4qdQXZSrarfJjnoLYPR3ztHd7EY foo/993
    added QmdtrsuVf8Nf1s1MaSjLAd54iNqrn1KN9VoFNgKGnLgjbt foo/994
    added QmbstvU9mnW2hsE94WFmw5WbrXdLTu2Sf9kWWSozrSDscL foo/995
    added QmXFd7f35gAnmisjfFmfYKkjA3F3TSpvUYB9SXr6tLsdg8 foo/996
    added QmV5BxS1YQ9V227Np2Cq124cRrFDAyBXNMqHHa6kpJ9cr6 foo/997
    added QmcXsccUtwKeQ1SuYC3YgyFUeYmAR9CXwGGnT3LPeCg5Tx foo/998
    added Qmc4mcQcpaNzyDQxQj5SyxwFg9ZYz5XBEeEZAuH4cQirj9 foo/999
    added QmXpXzUhcS9edmFBuVafV5wFXKjfXkCQcjAUZsTs7qFf3G foo
    

    In 0.4.19, we would have sent out provider records for files foo/{0..1000}
    before sending out a provider record for foo. If you were to ask a friend to
    download /ipfs/QmUQcSjQx2bg4cSe2rUZyQi6F8QtJFJb74fWL7D784UWf9, they would
    (barring other issues) be able to find it pretty quickly as this is the first CID
    you'll have announced to the network. However, if you ask your friend to
    download /ipfs/QmXpXzUhcS9edmFBuVafV5wFXKjfXkCQcjAUZsTs7qFf3G/0, they'll have to
    wait for you to finish telling the network about every file in foo first.

    In 0.4.20, we immediately tell the network about
    QmXpXzUhcS9edmFBuVafV5wFXKjfXkCQcjAUZsTs7qFf3G (the foo directory) as soon
    as we finish adding the directory to go-ipfs without waiting to finish
    announcing foo/{0..1000}. This is especially important in this release
    because we've drastically reduced the number of provide workers.

    The second benefit is that this queue is persistent. That means go-ipfs won't
    forget to send out this record, even if it was offline when the content was
    initially added. NOTE: go-ipfs does continuously re-send provider records in
    the background twice a day; it just might be a while before it gets around to
    sending out any specific one.

    Bitswap

    Bitswap now periodically re-sends its wantlist to connected peers. This should
    help work around some race conditions we've seen in bitswap where one node wants
    a block but the other doesn't know for some reason.

    You can track this issue here: #5183.

    Improved NAT Traversal

    While NATs are still p2p enemy #1, this release includes slightly improved
    support for traversing them.

    Specifically, this release now:

    ๐Ÿ‘ 1. Better detects the "gateway" NAT, even when multiple devices on the network
    claim to be NATs. ๐Ÿ‘ 2. Better guesses the external IP address when port mapping, even when the
    gateway lies.

    โฌ‡๏ธ Reduced AutoRelay Boot Time

    The experimental AutoRelay feature can now detect NATs much faster as we've
    โฌ‡๏ธ reduced initial NAT detection delay to 15 seconds. There's still room for
    ๐Ÿ‘Œ improvement but this should make nodes that have enabled this feature dialable
    earlier on start.

    Changelogs