
Changelog History

  • v0.4.13 Changes

    November 16, 2017

    Ipfs 0.4.13 is a patch release that fixes two high-priority issues that were discovered in the 0.4.12 release.

    Bugfixes:

  • v0.4.12 Changes

    November 09, 2017

    Ipfs 0.4.12 brings with it many important fixes for the huge spike in network size we've seen this past month. These changes include the Connection Manager, faster batching in ipfs add, libp2p fixes that reduce CPU usage, and a bunch of new documentation.

    The most critical change is the 'Connection Manager': it allows an ipfs node to maintain a limited set of connections to other peers in the network. By default (and with no config changes required by the user), ipfs nodes will now try to maintain between 600 and 900 open connections. These limits are still likely higher than needed, and future releases may lower the default recommendation, but for now we want to make changes gradually. The rationale for this selection of numbers is as follows:

    • The DHT routing table for a large network may rise to around 400 peers
    • Bitswap connections tend to be separate from the DHT
    • PubSub connections also generally form another distinct set of peers (including js-ipfs nodes)

    Because of this, we selected 600 as a 'LowWater' number and 900 as a 'HighWater' number to avoid having to clear out connections too frequently. You can configure different numbers as you see fit via the Swarm.ConnMgr field in your ipfs config file. See here for more details.
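    As a rough sketch (the field names below follow the Swarm.ConnMgr config for this release, but the values are only an example), a node on a constrained machine might lower the watermarks like so:

    # Example only: reduce the connection manager watermarks,
    # then restart the daemon for the change to take effect.
    ipfs config --json Swarm.ConnMgr '{"Type":"basic","LowWater":300,"HighWater":500,"GracePeriod":"20s"}'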

    โšก๏ธ Disk utilization during ipfs add has been optimized for large files by doing batch writes in parallel. Previously, when adding a large file, users might have ๐Ÿ”” noticed that the add progressed by about 8MB at a time, with brief pauses in between. This was caused by quickly filling up the batch, then blocking while it was writing to disk. We now write to disk in the background while continuing to add the remainder of the file.

    Other changes in this release have noticeably reduced memory consumption and CPU usage. This was done by optimising some frequently called functions in libp2p that were expensive in terms of both CPU usage and memory allocations. We also lowered the yamux accept buffer sizes, which were raised over a year ago to combat a separate bug that has since been fixed.

    And finally, thank you to everyone who filed bugs, tested out the release candidates, filed pull requests, and contributed in any other way to this release!

  • v0.4.11 Changes

    September 14, 2017

    Ipfs 0.4.11 is a larger release that brings many long-awaited features and performance improvements. These include new datastore options, more efficient bitswap transfers, greatly improved resource consumption, circuit relay support, ipld plugins, and more! Take a look at the full changelog below for a detailed list of every change.

    The ipfs datastore has, until now, been a combination of leveldb and a custom git-like storage backend called 'flatfs'. This works well enough for the average user, but different ipfs use cases demand different backend configurations. To address this, we have changed the configuration file format for datastores to be a modular way of specifying exactly how you want the datastore to be structured. You will now be able to configure ipfs to use flatfs, leveldb, badger, an in-memory datastore, and more to suit your needs. See the new datastore documentation for more information.
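    To illustrate the modular format, here is a hedged sketch of what a badger-backed spec might look like; the exact layout is defined in the datastore documentation, and switching backends on a repo that already holds data requires converting that data first:

    # Hypothetical example: select a badger datastore via the new spec format.
    # Do not apply this to a repo with existing data without a conversion step.
    ipfs config --json Datastore.Spec '{"type":"measure","prefix":"badger.datastore","child":{"type":"badgerds","path":"badgerds","syncWrites":true}}'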

    Bitswap received some much needed attention during this release cycle. The concept of 'Bitswap Sessions' allows bitswap to associate requests for different blocks to the same underlying session, and from that infer better ways of requesting that data. In more concrete terms, parts of the ipfs codebase that take advantage of sessions (currently, only ipfs pin add) will cause much less extra traffic than before. This is done by making optimistic guesses about which nodes might be providing given blocks and not sending wantlist updates to every connected bitswap partner, as well as searching the DHT for providers less frequently. In future releases we will migrate more ipfs commands over to take advantage of bitswap sessions. As nodes update to this and future versions, expect to see idle bandwidth usage on the ipfs network go down noticeably.

    The never-ending effort to reduce resource consumption had a few important updates this release. First, the bitswap sessions changes discussed above will help with improving bandwidth usage. Aside from that, there are two important libp2p updates that improved things significantly. The first was a fix to a bug in the dial limiter code that was causing it to not limit outgoing dials correctly; this resulted in ipfs running out of file descriptors very frequently (as well as incurring a decent amount of excess outgoing bandwidth), and it has now been fixed. Users who previously received "too many open files" errors should see them much less often in 0.4.11. The second change was a memory leak in the DHT that was identified and fixed: streams being tracked in a map in the DHT weren't being cleaned up after the peer disconnected, leading to the multiplexer session not being cleaned up properly. This issue has been resolved, and memory usage now appears to be stable over time. There is still a lot of work to be done improving memory usage, but we feel this is a solid victory.

    It is often said that NAT traversal is the hardest problem in peer-to-peer technology, and we tend to agree. In an effort to provide a more ubiquitous p2p mesh, we have implemented a relay mechanism that allows willing peers to relay traffic for other peers who might not otherwise be able to communicate with each other. This feature is still pretty early, and currently users have to manually connect through a relay. The next step in this endeavour is automatic relaying, and research for this is currently in progress. We expect that when it lands, it will improve the perceived performance of ipfs by spending less time attempting connections to hard-to-reach nodes. A short guide on using the circuit relay feature can be found here.
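    To give a feel for the manual flow, dialing a peer through a relay looks roughly like this (the peer ID is a placeholder, and the relay node must have relaying enabled, as described in the guide):

    # Sketch: connect to <target-peer-id> through any available relay.
    ipfs swarm connect /p2p-circuit/ipfs/<target-peer-id>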

    The last feature we want to highlight (but by no means the last feature in this release) is our new plugin system. There are many different workflows and use cases that ipfs should be able to support, but not everyone wants to be able to use every feature. We could simply merge in all these features, but that causes problems for several reasons: first off, the size of the ipfs binary starts to get very large very quickly. Second, each of these different pieces needs to be maintained and updated independently, which would cause significant churn in the codebase. To address this, we have come up with a system that allows users to install plugins to the vanilla ipfs daemon that augment its capabilities. The first of these plugins are a git plugin that allows ipfs to natively address git objects and an ethereum plugin that lets ipfs ingest and operate on all ethereum blockchain data. Soon to come are plugins for the bitcoin and zcash data formats. In the future, we will be adding plugins for other things like datastore backends and specialized libp2p network transports. You can read more on this topic in [Plugin docs](docs/plugins.md).
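    As a hypothetical example of what the git plugin enables, a git commit that has been ingested into ipfs can be traversed with the regular dag commands (the CID here is a placeholder):

    # Sketch: inspect an ingested git commit object and walk into its tree.
    ipfs dag get <git-commit-cid>
    ipfs dag get <git-commit-cid>/tree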

    In order to simplify its integration with fs-repo-migrations, we've switched the ipfs/go-ipfs docker image from a musl base to a glibc base. For most users this will not be noticeable, but if you've been building your own images based off this image, you'll have to update your dockerfile. We recommend a multi-stage dockerfile, where the build stage is based off of a regular Debian or other glibc-based image, the assembly stage is based off of the ipfs/go-ipfs image, and you copy build artifacts from the build stage to the assembly stage. Note: if you are using the docker image and see a deprecation message, please update your usage. We will stop supporting the old method of starting the dockerfile in the next release.
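    A minimal sketch of that multi-stage layout (the image tags and artifact paths are illustrative only):

    # Build stage: any glibc-based image works; Debian is used as an example.
    FROM debian:stretch AS build
    # ... build your artifacts here ...

    # Assembly stage: start from the ipfs/go-ipfs image and copy artifacts in.
    FROM ipfs/go-ipfs
    COPY --from=build /path/to/artifact /path/to/artifact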

    ๐Ÿ‘ Finally, I would like to thank all of our contributors, users, supporters, and friends for helping us along the way. Ipfs would not be where it is without you.

  • v0.4.10 Changes

    June 27, 2017

    Ipfs 0.4.10 is a patch release that contains several exciting new features, bugfixes and general improvements, including new commands, easier corruption recovery, and a generally cleaner codebase.

    โšก๏ธ The ipfs pin command has two new subcommands, verify and update. ipfs ๐Ÿ“Œ pin verify is used to scan the repo for pinned object graphs and check their integrity. Any issues are reported back with helpful error text to make error recovery simpler. This subcommand was added to help recover from datastore corruptions, particularly if using the experimental filestore and accidentally deleting tracked files. โšก๏ธ ipfs pin update was added to make the task of keeping a large, frequently ๐Ÿ“Œ changing object graph pinned. Previously users had to call ipfs pin rm on the ๐Ÿ“Œ old pin, and ipfs pin add on the new one. The 'new' ipfs pin add call would be very expensive as it would need to verify the entirety of the graph again. โšก๏ธ The ipfs pin update command takes shortcuts, portions of the graph that were ๐Ÿ“Œ covered under the old pin are assumed to be fine, and the command skips checking them.

    Next up, we have finally implemented an ipfs shutdown command so users can shut down their ipfs daemons via the API. This is especially useful on platforms that make it difficult to control processes (Android, for example), and is also useful when you need to shut down a node remotely and do not have access to the machine itself.
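    Because it goes through the API, the daemon can be stopped from the CLI or with a plain HTTP request (the curl form below assumes the default API address, and the accepted HTTP method has varied between versions):

    # Ask the local daemon to shut down cleanly.
    ipfs shutdown

    # Roughly equivalent, via the HTTP API (default address assumed).
    curl -X POST "http://127.0.0.1:5001/api/v0/shutdown"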

    ipfs add has gained a new flag; the --hash flag allows you to select which hash function to use, and we have given it the ability to select blake2b-256. This pushes us one step closer to shifting over to using blake2b as the default. Blake2b is significantly faster than sha2-256, and is also conjectured to provide superior security.
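    For example (the filename is a placeholder):

    # Add a file using blake2b-256 instead of the sha2-256 default.
    ipfs add --hash blake2b-256 <file>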

    We have also finally implemented a very early (and experimental) ipfs p2p command. This command and its subcommands allow you to open up arbitrary streams to other ipfs peers through libp2p. The interfaces are a little bit clunky right now, but shouldn't get in the way of anyone wanting to try building a fully peer-to-peer application on top of ipfs and libp2p. For more info on this command, to ask questions, or to provide feedback, head over to the feedback issue for the command.
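    The subcommand names have changed in later versions, but at this point the experimental flow looked roughly like the following sketch (the protocol name, addresses, and peer ID are placeholders, and the feature must first be enabled in the config):

    # Enable the experimental stream-mounting feature.
    ipfs config --json Experimental.Libp2pStreamMounting true

    # On the serving node: expose a local TCP service under a protocol name.
    ipfs p2p listener open my-app /ip4/127.0.0.1/tcp/10101

    # On the dialing node: forward a local port to that peer's service.
    ipfs p2p stream dial <peer-id> my-app /ip4/127.0.0.1/tcp/10102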

    A few other subcommands and flags were added around the API, as well as many other requested improvements. See below for the full list of changes.

  • v0.4.9 Changes

    April 30, 2017

    Ipfs 0.4.9 is a maintenance release that contains several useful bugfixes and improvements. Notably, ipfs add has gained the ability to select which CID version will be output. The common ipfs hash that looks like this: QmRjNgF2mRLDT8AzCPsQbw1EYF2hDTFgfUmJokJPhCApYP is a multihash. Multihashes allow us to specify the hashing algorithm that was used to verify the data, but they don't give us any indication of what format that data might be. To address that issue, we are adding another couple of bytes to the prefix that will allow us to indicate the format of the data referenced by the hash. This new format is called a Content ID, or CID for short. The previous bare multihashes will still be fully supported throughout the entire application as CID version 0. The new format with the type information will be CID version 1. To give an example, the content referenced by the hash above is "Hello Ipfs!". That same content, in the same format (dag-protobuf) using CIDv1 is zb2rhkgXZVkT2xvDiuUsJENPSbWJy7fdYnsboLBzzEjjZMRoG.
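    The selection is exposed as a flag on ipfs add; for example (the filename is a placeholder):

    # Emit a CIDv1 for the added content instead of the default CIDv0.
    ipfs add --cid-version 1 <file>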

    ๐Ÿ‘ CIDv1 hashes are supported in ipfs versions back to 0.4.5. Nodes running 0.4.4 and older will not be able to load content via CIDv1 and we recommend that they โšก๏ธ update to a newer version.

    There are many other use cases for CIDs. Plugins can be written to allow ipfs to natively address content from any other merkletree-based system, such as git, bitcoin, zcash and ethereum -- a few systems we've already started work on.

    Aside from the CID flag, there were many other changes as noted below:

  • v0.4.8 Changes

    March 29, 2017

    Ipfs 0.4.8 brings with it several improvements, bugfixes, documentation improvements, and the long-awaited directory sharding code.

    Currently, when too many items are added into a unixfs directory, the object gets too large and you may experience issues. To prevent this problem, and generally make working with really large directories more efficient, we have implemented a HAMT structure for unixfs. To enable this feature, run:

    ipfs config --json Experimental.ShardingEnabled true
    

    And restart your daemon if it was running.

    Note: With this setting enabled, the hashes of any newly added directories will be different than they previously were, as the new code will use the sharded HAMT structure for all directories. Also, nodes running ipfs 0.4.7 and earlier will not be able to access directories created with this option.

    That said, please do give it a try, let us know how it goes, and then take a look at all the other cool things added in 0.4.8 below.

  • v0.4.7 Changes

    March 15, 2017

    Ipfs 0.4.7 contains several exciting new features! First off, the long-awaited filestore feature has been merged, allowing users the option to not have ipfs store chunked copies of added files in the blockstore, pushing the burden of ensuring those files are not changed onto the user. The filestore feature is currently still experimental, and must be enabled in your config with:

    ipfs config --json Experimental.FilestoreEnabled true
    

    before it can be used. Please see this issue for more details.

    Next up, we have merged initial support for ipfs 'Private Networks'. This feature allows users to run ipfs in a mode that will only connect to other peers in the private network. This feature, like the filestore, is being released experimentally, but if you're interested please try it out. Instructions for setting it up can be found here.
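    The basic setup is a pre-shared key that every member node keeps in its repo; a sketch using the commonly used key generator (treat the tool and paths as an example, not the only way):

    # Generate a shared key and install the same file on every member node.
    go get github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
    ipfs-swarm-key-gen > ~/.ipfs/swarm.key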

    This release also enables support for the 'mplex' stream muxer by default. This stream multiplexing protocol was available previously via the --enable-mplex-experiment daemon flag, but has now graduated to being 'less experimental' and no longer requires the flag to use it.

    Aside from those, we have a good number of bugfixes, perf improvements and new tests. Here's a list of highlights:

  • v0.4.6 Changes

    February 21, 2017

    Ipfs 0.4.6 contains several bugfixes related to migrations, along with a few other improvements to other parts of the codebase. Notably:

    • 0๏ธโƒฃ The default config will now contain some ipv6 addresses for bootstrap nodes.
    • ๐Ÿ“Œ ipfs pin add should be faster and consume less memory.
    • ๐Ÿ“Œ Pinning thousands of files no longer causes superlinear usage of storage space.

  • Improvements

  • Documentation

  • Bugfixes

  • General Changes and Refactorings

  • Testing

  • v0.4.5 Changes

    February 11, 2017
    Changes from rc3 to rc4
    Changes from rc2 to rc3
    Changes from rc1 to rc2
    Changes since 0.4.4
  • v0.4.4 Changes

    October 11, 2016

    This release contains an important hotfix for a bug we discovered in how pinning works. If you had a large number of pins, new pins would overwrite existing pins. Apart from the hotfix, this release is equal to the previous release, 0.4.3.

    • Fix bug in pinsets fanout, and add stress test. (@whyrusleeping, ipfs/go-ipfs#3273)

    We published a detailed account of the bug and fix in a blog post.