v0.7.9 Changes - May 15, 2020
🚀 The primary focus of this release is alternative S3 support for multimedia assets and a Mastodon timeline feed.
🍱 There is a new section in the preferences page for S3 Asset Storage. If you fill out the details here, then when you upload large assets like audio and video files, they will be uploaded to this bucket instead of your main bucket. This is mainly for taking advantage of lower-cost S3-compatible storage providers like Wasabi, which doesn't charge for egress. If you are supporting a large infrastructure with lots of downloads, this could save you a ton of money.
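The routing described above can be sketched as a simple dispatch on file type. This is a minimal illustration, not the server's actual code: the bucket names, the extension list, and keying off the extension at all are assumptions.

```python
# Hypothetical extension list; the real server may classify assets differently.
MEDIA_EXTENSIONS = {".mp3", ".m4a", ".wav", ".mp4", ".mov", ".mkv"}

def destination_bucket(filename, main_bucket, media_bucket=None):
    """Route large multimedia uploads to the cheaper S3-compatible bucket;
    everything else goes to the main bucket."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if media_bucket and ext in MEDIA_EXTENSIONS:
        return media_bucket
    return main_bucket
```

If no alternate bucket is configured in the prefs, everything falls through to the main bucket, matching the behavior before this release.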
Also, on the prefs page, in the Mastodon section, you can now subscribe to your Mastodon timeline (toots, mentions, etc.). You can also put in a text string as a filter and only posts containing that string will be included in your feed. This lets you bring a tamed/focused version of your timeline into your workflow.
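The filter behaves like a substring match over the timeline. Here is a rough sketch of that idea; the post shape (dicts with a `content` key) and the case-insensitive match are assumptions, not the documented behavior.

```python
def filter_timeline(posts, needle=None):
    """Keep only posts whose text contains the filter string.
    With no filter set, the whole timeline passes through.
    `posts` is a list of dicts with a 'content' key (hypothetical shape)."""
    if not needle:
        return list(posts)
    needle = needle.lower()
    return [p for p in posts if needle in p["content"].lower()]
```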
🛠 Standard improvements and bug fixes as always.
v0.7.8 Changes - September 26, 2019
🚀 The primary focus of this release is the aggregator and feed item scanner. This isn't a "sexy" release at all. It's all boring, under-the-hood stuff that has needed doing for a long time. The feed scanner is now asynchronous, like the feed puller, which will make things go much faster. There were also a lot of edge cases I'd been noting when cartulizing articles that are now fixed.
v0.7.7 Changes - September 17, 2019
🚀 The primary focus of this release is on token support with cartulizing articles, with a couple of other bells and whistles thrown in.
↪ You can now send a URL to be cartulized by sending a POST to "your.cart.server/cgi/in/cartulize" with a private token as a parameter. You can use this API to send articles to your server to be archived from any other app, service, or workflow. The most obvious use right off the bat is with Apple's iOS Shortcuts app, to create a share sheet item for cartulizing URLs. This lets you work around those times when an app doesn't show Safari as a share target, when a particular site blocks access from your cart server, or when a site restricts bookmarklet activity with a Content Security Policy.
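A call to the endpoint might look like the sketch below. The parameter names ("url", "token") and HTTPS are assumptions; check your own server's prefs page for the actual private token and parameter spelling.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def cartulize(article_url, token, server="your.cart.server"):
    """POST an article URL to the cartulize endpoint for archiving.
    Parameter names are hypothetical -- adjust to your server's API."""
    endpoint = f"https://{server}/cgi/in/cartulize"
    body = urlencode({"url": article_url, "token": token}).encode()
    req = Request(endpoint, data=body, method="POST")
    with urlopen(req) as resp:
        return resp.status
```

The same form-encoded body is what an iOS Shortcut's "Get Contents of URL" action would send with its method set to POST.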
👍 If you come up with cool uses for this, I'd love to hear about them. Post them in the SOPML Google support group.
v0.7.6 Changes - April 07, 2019
🚀 The primary focus of this release is on editor improvements and features, and Ubuntu 18 LTS support.
The big new feature is editor templating. You can now use placeholder variables in your outline and save it as a template, then use that template to generate dynamic outlines with the template as boilerplate. It's got the beginnings of a nice language going, so you can use modifiers on your variables like ++, --, jpg256, gif128, etc. to make those variables expand to new values.
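To make the modifier idea concrete, here is a toy expander. The placeholder syntax (`${name}`), the exact semantics of `++` and `--` (bump a counter on each use), and the variable shapes are all invented for illustration; the real template language is not documented in these notes.

```python
import re

def expand(template, values):
    """Expand ${name}, ${name++}, and ${name--} placeholders.
    ++ increments the counter after each use; -- decrements it.
    This syntax is a guess at the style of the feature, not its spec."""
    counters = dict(values)

    def repl(match):
        name, mod = match.group(1), match.group(2)
        val = counters[name]
        if mod == "++":
            counters[name] = val + 1
        elif mod == "--":
            counters[name] = val - 1
        return str(val)

    return re.sub(r"\$\{(\w+)(\+\+|--)?\}", repl, template)
```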
👍 We're also now compatible with Ubuntu 18, which was a major pain. Supporting 14, 16 and 18 simultaneously blows.
v0.7.4 Changes - April 16, 2018
🚀 The primary focus of this release is on improvements to article content extraction and workarounds for problem feeds in the aggregator. Since we added a new article extraction engine last time, the number of articles we are able to extract has improved. Even so, the quality was sometimes poor. I spent a lot of time regex'ing and css'ing to make the formatting of the articles as presentable as possible.
👍 The goal, in general, has always been to take any document and pare it down to its most basic structure. That means no DIVs, no HTML attributes, etc. Just the most vanilla, basic HTML you can imagine. It always looks better that way.
v0.7.3 Changes - March 29, 2018
🚀 This is a bug fix release for a problem with the news river page. This reverts the real-time pulling and puts it back to river.js/jsonp for now until I can work out what the problem is.
🚀 This release also includes some tweaks to cartulizing to improve extraction.
v0.7.2 Changes - March 27, 2018
🚀 This release is again focused on the aggregator, but with some significant cartulizing improvements also.
👍 The aggregator has been steadily getting better over the years, but it still needed a thorough going-over to tighten things down and make them more efficient. That's what was done. It was not just one thing, but dozens of small things to keep it from duplicating work or doing more than necessary. I'll document those things in a blog post. It's interesting stuff.
✅ The heart and soul of this system is the cartulizing engine that saves articles as you read them in your news river. It's why this thing exists in the first place. There were plenty of times when this article-saving process (we call it "cartulizing") would fail. So I began testing the PHP port of Mozilla's Readability library, and it worked well on some articles that the FiveFilters engine failed on. Instead of simply replacing one with the other, the system now tries Readability first and, if that fails, falls back to FiveFilters. We now get two full shots at grabbing each article.
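The fallback chain above can be sketched generically. The engines here are stand-in callables; the real system calls a PHP Readability port and FiveFilters, which this Python sketch only stubs out.

```python
def extract_article(html, engines):
    """Try each extraction engine in order; return the first non-empty result.
    An engine is any callable taking HTML and returning extracted text or None."""
    for engine in engines:
        try:
            result = engine(html)
        except Exception:
            continue  # a crashing engine counts as a miss, move on
        if result:
            return result
    return None
```

Ordering the list `[readability, fivefilters]` gives exactly the "try Readability first, fall back to FiveFilters" behavior, and adding a third engine later is a one-line change.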
🐎 Another change this time is that the river page now pulls directly from the live database instead of pre-built river.js files. Eventually I'd like to make river.js file building optional instead of default, to further enhance server performance. That's the plan.
v0.7.1 Changes - November 22, 2017
🚀 This release is mainly focused on the aggregator and how it handles dead feeds. I define a dead feed as either having a fatal HTTP response code (400s or 500s) or having an XML parsing error that prevents any usable data from being extracted from the feed.
We keep an error counter for each feed and increment it by 4 for a 4xx response or by 5 for a 5xx response. We increment by 1 if the feed had a fatal parsing error. Once per day, the "clean_bad_feeds" script runs and checks a set of conditions; if all are met, the feed is marked as dead in the newsfeeds table.
The death conditions are: error count greater than 1000 [AND] the last http response code was 400 or greater [AND] the last time a sub-400 response code was received was more than 30 days ago.
Upon death, the feeds are still not deleted. Instead they are just marked dead. This way their historical items are still searchable, they just don't clog up the aggregator anymore.
I use some fake HTTP status codes to indicate other errors when pulling the feed, like when the connection is reset or the request times out. These are in the 900s, and each increments the error counter by 1. 'ENOTFOUND' increments by 10, because the hostname doesn't exist anymore.
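The counting and death rules above can be condensed into a couple of small functions. This is a sketch of the described logic, not the actual server code; the function names, the `enotfound` flag, and the column shapes are made up for illustration.

```python
from datetime import datetime, timedelta

def error_increment(status, parse_error=False, enotfound=False):
    """How much a single failed pull adds to a feed's error counter."""
    if enotfound:
        return 10          # hostname no longer exists
    if 400 <= status < 500:
        return 4           # 4xx responses
    if 500 <= status < 600:
        return 5           # 5xx responses
    if status >= 900 or parse_error:
        return 1           # fake 900-range codes and fatal parse errors
    return 0

def is_dead(error_count, last_status, last_good_response_at, now=None):
    """All three death conditions must hold for clean_bad_feeds to mark a feed dead."""
    now = now or datetime.utcnow()
    return (error_count > 1000
            and last_status >= 400
            and now - last_good_response_at > timedelta(days=30))
```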
v0.6.17 Changes - July 04, 2017
🚀 This release focuses on the editor again. There are quite a few improvements to basic usage, like prompting to confirm you really want to leave the page when you have unsaved changes. Also, if you change the title of an outline and hit save, you'll be asked whether you want to save as a new outline or overwrite the current one.
🚚 There are two big editor features: private outlines and un-archiving include nodes. You can set an outline as private, and it will be removed from S3 and only accessible through the private token URL provided in the editor toolbar. If you un-archive an include node or a set of include nodes, the content from those includes will be sucked back into the outline and set as normal nodes.
v0.6.16 Changes - June 19, 2017
🚀 I fixed a major bug with S3 in this release. The newer AWS regions, like Frankfurt and a few in the US, no longer support API signing with Signature Version 2. I haven't created a bucket in a region other than us-east-1 in so long that I didn't realize this was a problem.
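The difference matters because Signature Version 4 derives a per-date, per-region signing key instead of using the secret directly as v2 does. Below is a sketch of that standard derivation (as documented by AWS); it's illustrative of what the fix entails, not this project's actual code, and the credential in the test is fake.

```python
import hashlib
import hmac

def sigv4_signing_key(secret, date_stamp, region, service):
    """Derive the SigV4 signing key: an HMAC-SHA256 chain over
    date -> region -> service -> the literal 'aws4_request'."""
    def sign(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret).encode(), date_stamp)  # e.g. "20170619"
    k_region = sign(k_date, region)                        # e.g. "eu-central-1"
    k_service = sign(k_region, service)                    # "s3"
    return sign(k_service, "aws4_request")
```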
🚀 The big new feature with this release is version history in the editor. I'm sure you're like me and have accidentally overwritten an outline before by hitting save instead of "save as...". Well, now you can recover by hitting the little time machine icon in the editor toolbar and going back to any previous version. It only saves a new version if the content of the outline actually changes.
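The "only save a new version if the content actually changed" check can be done by comparing a hash of the outline against the latest stored version. This is a minimal sketch of that idea; the in-memory history list stands in for whatever storage the real editor uses.

```python
import hashlib

def save_version(history, content):
    """Append a new version only if the content hash differs from the latest.
    `history` is a list of {'hash', 'content'} dicts (hypothetical storage)."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    if history and history[-1]["hash"] == digest:
        return False  # nothing changed, don't pile up identical versions
    history.append({"hash": digest, "content": content})
    return True
```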
👍 Also, the aggregator and subscribe bookmarklet now support JSON Feed.