What if we had Local-First Software? (adlrocha.substack.com)
291 points by adlrocha on Oct 15, 2020 | 144 comments


I have a lot of experience with this having developed a personal finance app that is local-first: https://actualbudget.com/

Having all of the data local is a superpower. It changes how you build the app, and you just never have to worry about the network. Everything is fast _by default_. It's great.

The app uses CRDTs to sync and it's unlocked a lot of powerful stuff like a robust undo system. We also offer end-to-end encryption.

On the negative side, over time I've stepped back a little from being truly local-first. The mobile app used to connect to the desktop by pointing the phone's camera at a QR code on the desktop, and it would literally sync peer-to-peer. Maintaining that was a nightmare and users had endless networking problems. Our network infrastructure unfortunately is not built for truly peer-to-peer apps. Maybe IPv6 will help, I don't know. My app now has a "centralized" server, but it's really just a backup that adds some convenience - the app is still totally local but syncs through a single server. You can work on it for weeks offline and then sync up. In the future when p2p is ready, I'll be ready for it.
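For the curious, the core trick behind this style of sync can be sketched in a few lines. This is a toy last-writer-wins map, one of the simplest CRDT families - an illustration of the general idea only, not Actual's implementation (the field names and tiebreak here are my own):

```javascript
// A minimal last-writer-wins (LWW) map. Each field write is stamped
// with (timestamp, clientId); merges keep the highest stamp.
class LWWMap {
  constructor(clientId) {
    this.clientId = clientId;
    this.fields = new Map(); // key -> { value, ts, clientId }
  }

  // Local writes are stamped and returned as a message to broadcast
  // (to peers directly, or relayed through a dumb sync server).
  set(key, value) {
    const entry = { value, ts: Date.now(), clientId: this.clientId };
    this.fields.set(key, entry);
    return { key, ...entry };
  }

  // Applying a remote message: keep whichever write has the highest
  // (ts, clientId) stamp. Arrival order doesn't matter, so replicas
  // converge even after weeks offline.
  apply(msg) {
    const cur = this.fields.get(msg.key);
    const wins = !cur || msg.ts > cur.ts ||
      (msg.ts === cur.ts && msg.clientId > cur.clientId);
    if (wins) {
      this.fields.set(msg.key, { value: msg.value, ts: msg.ts, clientId: msg.clientId });
    }
  }

  get(key) {
    return this.fields.get(key)?.value;
  }
}
```

Because apply() is commutative and idempotent for stamped messages, the "single server" only has to store and relay messages; it never needs to understand or merge the data itself.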


The app looks great. What technologies did you use to develop it?


The app's UI is consistent across all platforms. Such delightful colours and design. What an impressive execution!

GP, I'm curious about the frontend stack as well.


@avrionov @ignoramous Thank you!

The frontend for desktop/browser is React, and Electron is used to create standalone apps. I don't love Electron, and if you don't want it a browser version is available too: https://app.actualbudget.com/

The mobile app uses React Native and has a custom UI just for mobile, but it shares styles and a few small components with web. I don't believe that you can build one UI that works well on both desktop and mobile.

All platforms share the backend code which runs in a separate process/thread. On mobile, I use https://github.com/JaneaSystems/nodejs-mobile to run the JS backend locally.

That's mostly it. No UI frameworks are used. I love UI/UX design so I built it from the ground up to be as usable and performant as possible.


I cannot imagine my life without Actual. Bookkeeping is such a joy with this tool. I've used Actual for more than a year to manage the budget for two companies and my personal finances. Highly highly recommend.


Wow, that's really great to hear! There's so much more I'm excited to build into it, so it will only get better. Something I'm most excited about is custom reports. Having all your data local is going to make those super powerful.


This looks great and similar to YNAB. Do you feel you offer anything over YNAB beyond just being local-first?


Right now, the differences are all in the details (income categories, no weird credit card handling, etc), and there are many of them. But instead I'll focus on what is to come soon:

* A custom rules engine (to be released in a few weeks). You'll be able to write a list of conditions for matching transactions, and a list of actions to apply when matched. The system will automatically encode what it learns as rules (apply category X to payee Y) so you can see what it's doing and adjust.

* Multiple budget types. Zero-based budgeting is cool, but Actual is just a tool for you. If you want, you can use a simple report-based budget that just shows income vs. expenses. You choose the type you want and adapt it to your lifestyle.

* Custom reports. This is really where having all your data local is incredible. You'll be able to write queries into your data, process it, and render any kind of visualization.
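To make the conditions/actions idea concrete, here's a tiny sketch of what such a rules engine could look like (the schema, operators, and field names are my guesses for illustration, not Actual's actual format):

```javascript
// A rule: if all conditions match a transaction, apply all actions.
const rules = [
  {
    conditions: [{ field: 'payee', op: 'contains', value: 'Starbucks' }],
    actions: [{ field: 'category', set: 'Coffee' }],
  },
];

// Supported condition operators (easily extended).
const ops = {
  contains: (a, b) => String(a).toLowerCase().includes(String(b).toLowerCase()),
  eq: (a, b) => a === b,
  gt: (a, b) => a > b,
};

// Returns a new transaction with the actions of every matching rule applied.
function applyRules(txn, rules) {
  const out = { ...txn };
  for (const rule of rules) {
    const matched = rule.conditions.every((c) => ops[c.op](out[c.field], c.value));
    if (matched) {
      for (const action of rule.actions) out[action.field] = action.set;
    }
  }
  return out;
}

const txn = applyRules({ payee: 'STARBUCKS #1234', amount: -450 }, rules);
console.log(txn.category); // 'Coffee'
```

The "system learns rules" part would then just mean generating entries like these automatically from past categorizations, so the user can inspect and edit them.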


> You'll be able to write queries into your data, process it, and render any kind of visualization.

Now that sounds very cool.

Out of curiosity, do you have any plans to handle things that cannot be local-first? (For example, with YNAB I like the Plaid integration to pull transaction data out of my banks.)

Something like that for your application would need to go through your own API, right?


Yep, bank syncing is going to launch by the end of the year.

It's tough. I've wrestled with this for over a year. Bank syncing has to come from my server, which means by doing so you are giving us access to your transactions. No banking provider has any way for clients to contact them directly. I think this is possible with the right encryption, but nobody is working on it. I've given this feedback to Plaid but they don't care.

The majority of people need the convenience of syncing though. So I plan to launch it, and it'll go through my server, and I plan to communicate clearly the privacy that you are giving up by doing so.


I assume by mentioning Plaid, you'd be using their services to extract the transaction data?

I keep hoping to find a (maintained/viable) open source project that uses headless browsing to login locally and extract transaction details.


I'm also interested in automated exports performed locally.

For my credit union, I "reverse engineered" the API and export multiple times throughout the day.

I wrote an extension that exports the data for CapitalOne, but I haven't gotten around to trying it either headless or even just in any automated fashion.

Easy automated export of user data, even beyond financials, is something I'd like to see more of. Feels like it could be a workaround while there's so little decentralization.


@jpeeler @Marksort I wish the same. I've also given feedback to Plaid that they should implement an encryption scheme that allows data to be encrypted from Plaid to the user, so at least apps which sit in the middle couldn't read your data. Nobody is interested in that though.

There _sort of_ is a solution where your computer directly connects to the bank. The format is called OFX (https://github.com/libofx/libofx), and there is a directory of banks that provide these files directly online. This site (https://www.ofxhome.com/) lists the URLs to use for each bank.

I used that for a while many years ago. But it's terrible and requires massive maintenance. For an app that needs to connect to arbitrary banks, there's no way developers can support these direct downloads. There are always different errors in the data for different banks, etc. Unfortunately, Plaid is solving a real problem.


Personally, if I can get a YNAB4-equivalent that will work with macOS 10.15+ I'd be very happy. (YNAB4 is 32-bit only, and YNAB5 is subscription—and since I have no need for the cloud stuff I refuse to rent it; I do have a license for YNAB4.)


I've actually been looking for an alternative to YNAB since they forced you into a cloud subscription model with data hosted on their servers. I mean, they didn't force you exactly, but they essentially stopped updating the standalone product.


I'm curious what your definition of "ready" is.

I'm doing a similar quasi-local-first project with p2p. I've been toying with libp2p[1], which underpins a number of distributed storage solutions and originated from IPFS.

[1]: https://libp2p.io


Maybe you don't need to use the QR code to establish a connection? Instead you could just use the QR code to transmit whatever data the phone might need from the computer. If it's just a finance app, surely the data would be small enough to fit into a QR code; maybe show a series of codes if the data is really big?


Fun idea! Just looked it up: about 3KB is the max data in a single QR code, and data _definitely_ gets bigger than that - if you import 100 transactions it'll probably exceed it. Also, the syncing mechanism requires a two-way connection: the devices communicate changes back and forth until they are synced up.
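For what it's worth, the multi-code half of the idea is mechanically simple; it's the two-way handshake that's the real blocker. A sketch of the chunking (sizes chosen so the base64 text stays under a single code's ~2,953-byte capacity; real QR rendering would need a library):

```javascript
// Split a payload across several QR-sized chunks and reassemble them.
// 2000 raw bytes -> ~2667 base64 chars plus a small header, which fits
// in one version-40 QR code (~2953-byte capacity at low error correction).
const CHUNK_BYTES = 2000;

function toChunks(payload) {
  const buf = Buffer.from(payload, 'utf8');
  const total = Math.ceil(buf.length / CHUNK_BYTES);
  const chunks = [];
  for (let i = 0; i < total; i++) {
    const body = buf.subarray(i * CHUNK_BYTES, (i + 1) * CHUNK_BYTES);
    // "index/total:" header lets the scanner reassemble in any order
    // and know when it has seen everything.
    chunks.push(`${i}/${total}:${body.toString('base64')}`);
  }
  return chunks;
}

function fromChunks(chunks) {
  const parts = chunks.map((c) => {
    const sep = c.indexOf(':');
    const [i] = c.slice(0, sep).split('/').map(Number);
    return { i, body: Buffer.from(c.slice(sep + 1), 'base64') };
  });
  parts.sort((a, b) => a.i - b.i);
  return Buffer.concat(parts.map((p) => p.body)).toString('utf8');
}
```

This only gives you one-way transfer, though - as the parent notes, the sync protocol itself needs messages flowing in both directions.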


> Maintaining that was a nightmare and users had endless networking problems. Our network infrastructure unfortunately is not built for truly peer-to-peer apps

What causes these problems? NAT?


Yep, that and public routers blocking certain types of protocols. There was a phase where I switched to using mDNS (https://en.wikipedia.org/wiki/Multicast_DNS), which discovers other devices so you didn't need a QR code. Turns out most public wifis (coffee shops) flat out block it.


What protocol were you using before that? Had you looked at using webrtc?


WebRTC is a horribly fantastic mess of complexity which is very hard to work with without using a third-party provider like https://simplewebrtc.com. Your head will be swimming with acronyms like ICE, TURN, STUN, etc. before you even understand the basic signalling mechanisms.



Feross makes really great libraries. WebTorrent, which is basically an extension of simple-peer, is fantastic. See https://webtorrent.io.


Compared to many other programming things, it's really easy. You just have to understand that its reason for existing is to fix all networking problems. It's really _NOT_ complex!


I've never found a good explanation of how it all works - something that covers everything from the high-level reasons down to what every byte in a typical packet means.

It's just a mess of documents referring to other documents for details.

I'm sure it all makes sense if you just spend a few months making sense of it all, but it's not very accessible to someone looking to do it as a weekend project.


You first stated that these concepts are hard, which is really not the case. Most of the terminology is explained really well here: https://webrtcglossary.com/

Not understanding the byte-level protocol is understandable, but that's a quite different topic. You really don't need to understand it; there are libraries for many programming languages doing the hard work for you. And if you want to understand it, it's normal that you have to read a lot of different documents. If you had no understanding of HTTP/2, you would have to read a lot of RFCs too, as every version is built on the former and retains many of its concepts.


> You first stated that these concepts are hard, which is really not the case.

That was another person, I don't struggle with that.

> If you had no understanding of HTTP/2, you would have to read a lot of RFCs too, as every version is built on the former and retains many of its concepts

The turd doesn't smell less because you managed to find another.

The reason I wanted to read about it was that some sources say STUN requires two public IPv4 addresses. I wanted to know what would happen if I had only one, or if I had two IPv6 addresses.


What about running a default HTTP Proxy VPN for it?


I'm not exactly sure what you mean. Wouldn't that be a centralized thing that everybody has to go through?


This looks great, do you have a newsletter? I'd love to know when you implement bank syncing. That's the only thing keeping me from signing up right now.


Thanks! I sort of do, but unfortunately you can currently only subscribe to it by signing up for a trial. If you email help@actualbudget.com I'll subscribe your email manually.


I've seen some ways you can hack this functionality together with email alerts for any > $0 transaction.

Does Actual expose any API stuff?



I have 15 years of my personal budget in a double-entry system; is the import API sufficient that I could import that data?


Did you consider spinning up a Tor hidden service on the desktop app and sending the onion address and credentials via QR to the mobile app?


We've come full circle. The web developers are starting to realize there's a whole operating system underneath the browser that might actually be of some use.


Web applications can be run locally.

As much as it's easy to shit on the web stack, it solved a real problem, namely the 'operating systems underneath' being absolute clusterfucks at least as much as the web. It's really sad that the only way to reliably access something from multiple devices and locations are websites.

Ironically, if we're ever going to see something like the OP wants, it's 100% going to be locally-run webapps. Whether we like it or not, think it's right or not, it's clear by now that the average joe is not going to learn tools like git.


OSes may be technical clusterfucks. However, the most important thing imho is that all this shit is running on a stack that's robust against vendor lock-in. That's easier with a smaller stack since you have less vendors to worry about. LFSFS (local-first software from scratch), if you will.


That's why I am so excited about gioui.org: it'll allow native applications for macOS, Windows, Linux (both X11 and Wayland), iOS, Android and (sort of) WebAssembly.

It is a simple wrapper around the input devices like mouse, keyboard and touch screen and outputs the graphics using GPU shaders (DirectX, OpenGL, Metal). It feels like a mix between a 2D game engine and a UI library.

The immediate-mode architecture is a mindfuck in how easy it is compared to (functional) reactive programming. You basically get all the benefits of having observers of values with just values.


The usual problems are caused by the non-native controls: no built-in affordances like spell checking and all the OS-specific behaviors, no easy way to plug in accessibility, and the non-native look and feel.

All this is totally fine for a game UI, and is a much harder sell for productivity software.


That's a little bit unfair; even the W3C's own web agency has a hard time building an accessible website, and needed to spend two weeks (of billable time) reviewing different web frameworks because of it:

https://w3c.studio24.net/updates/on-not-choosing-wordpress/


Not that many people need accessibility. About 100% of people need spell-checking, though.


Certain types of automation and testing are often built on top of accessibility features. Accessibility doesn't just mean accessible to the disabled. It also means accessible to your code.


Exactly. And people forget that depending on the UI libraries you use, you might get a mostly accessible app by default. All of the core Windows widget classes, and WinForms and WPF, support accessibility features. Worst case you have a somewhat broken object tree without good labels, but that's already a start.

Dealing with something that believes it only needs a canvas to draw on is a PITA.


> Not that many people need accessibility.

100% will require accessibility in some form if your timeframe is long enough. We aren't young forever, you know.


> All this is totally fine for a game UI, and is a much harder sell for productivity software.

Is it? Web and electron applications have the same issue, and they seem to be doing fine.


Flutter is similarly building a canvas based rendering system for the web, called CanvasKit.

I think it's neat that folks can skip the conventional web technologies & hypertext & use the web to create arbitrary systems. But I also think this is quite a malignant & user-de-empowering & tragic development for the web. The DOM has been extremely powerful as a common language, one that users can make use of directly & via extensions & scrapers. Stealing this fire back from the users, reneging on the web's promise that it's the user's agent, for them to use to navigate the web as they like: that's a great & sad regression.

In my opinion the web is worthwhile & useful. Bypassing it, treating the web like a thin client, a way to push pixels in people's faces, via some auxiliary rendering process, be it local (WebAssembly) or remote, is a terrible usurpation of the user's agency. Projects like Flutter's CanvasKit & others are a great danger to one of the few pieces of technology that has enabled users some level of control & freedom: the web.


I started playing around with gioui the other day. It looks like it has a lot of potential. I was curious if it could redraw efficiently enough to be used as a go equivalent to pygame. Have you experimented with anything that requires a high refresh rate? I guess I need to finish my experimenting and find out for myself.


> The web developers are starting to realize there's a whole operating system underneath the browser that might actually be of some use.

The article doesn't mention anything at the OS level. Every technology suggested runs in the browser.


I'd be very curious to know what percentage of total aggregate CPU and memory resources are in the hands of everyday people (in smartphones) compared to in datacentres.


1 gigabyte over a billion cellphones turns into an exabyte, so I would guess similar orders of magnitude?



Back-of-the-enveloping it I have a hard time seeing how you'd arrive at more than low-100's of exabytes total storage across the world's supply of cellphones -- there's 1.4 billion active iPhones and 3.75 billion other phones, so even if you assume that the average iPhone has 32GiB and the average non-iPhone has 16GiB (which is pretty generous considering a large chunk of those are probably feature phones), you only get up to ~100 EiB. To break 2000 EiB you'd need each cellphone to have more than 400 GiB of storage, which I'm pretty sure is very much not true, unless my phone is older than I thought it was.
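Redoing the same back-of-the-envelope arithmetic, using the parent's device counts and per-device averages:

```javascript
// Sanity-check the estimate above: generous per-device storage
// averages times the cited active-device counts (~2020 figures).
const iphones = 1.4e9;            // active iPhones
const others  = 3.75e9;           // other phones
const GiB = 2 ** 30, EiB = 2 ** 60;

const totalEiB = (iphones * 32 * GiB + others * 16 * GiB) / EiB;
console.log(totalEiB.toFixed(0)); // "98" -> the "~100 EiB" above

// Average storage each phone would need for the total to hit 2000 EiB:
const perPhoneGiB = (2000 * EiB) / ((iphones + others) * GiB);
console.log(perPhoneGiB.toFixed(0)); // "417" -> the ">400 GiB" above
```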


That's storage, not memory.


Yeah this article mostly makes me feel very old.


FINALLY!


I've been soaking in this space for the past couple years as I've developed PhotoStructure, a local-first photo and video manager [1]. I have opinions.

While I'm certainly sold, personally, on local-first software (obviously), I think it's going to become an increasingly hard sell for Normal People. 15 years ago, a large percentage of households had at least one desktop or laptop, as it was the only way to get on the internet.

As smartphones have taken that place for most people, nowadays if you've got a desktop it's likely only for gaming, and most commonly purchased laptops (especially those that don't come with Windows) have stagnated in performance and capability as their makers push people toward their cloud service offerings. NAS devices seem to remain niche products. This drove me to support everything: Windows, macOS, Linux desktops, and anything that can run Docker. I've been very surprised to find an almost perfect split of users between the two editions (desktop vs. Docker).

Given the popularity of personal-information-selling social platforms (however much it may be waning), I don't think most people are concerned enough about privacy to give up (almost any) convenience. I'd love to be wrong. I'm hoping to make something easy enough to install and live with that it isn't an inconvenience, but it's hard to compete against enjoying someone else's infrastructure and application management teams, for "free" (where "free" === all of your personal information and telemetry).

(For what it's worth, PhotoStructure only runs on hardware you own, and none of your data leaves your computer, except for error reporting, and even that can be disabled).

[1] https://photostructure.com/


Local first software makes even more sense on a smartphone than a desktop.

My desktop is always network connected, and my internet connection at home is fairly reliable. My smartphone is often in areas with no coverage. There are tons of apps I can't use because they don't keep local data. It would be a perfect time to read and write email, but the platforms that would let you do that are dead.


>It would be a perfect time to read and write email, but the platforms that would let you do that are dead.

The default mail app on iOS is perfectly capable of locally writing and downloading/reading email. Can't search all of history since that's in the cloud but for recent usage it's fine.


(there are good imap clients, at least for Android, which support a sometimes-connected network).


Your product looks interesting to me. My minor beef is with the recurring annual subscription (assuming no updates, I just want to pay once). I do like the idea though.


> assuming no updates

Know that the free tier will still open and browse existing libraries: I'm not going to hold anyone's library hostage in exchange for a subscription.


> How would an offline-first Internet look like?

Did people not grow up on Juno Email services?

Oh wow. An entire generation of modern computer users never had the opportunity, did they? You'd connect to the internet to download your email, then you disconnected (otherwise you'd tie up your phone line and run up a big phone bill).

The POP3 protocol still works. You can still get Mozilla Thunderbird and grab POP3-style emails from GMail or whatever. But people want IMAP instead; an always-connected model provides a better idea of which emails were "read" or "not read", for example.

Ex: you may download all emails on your phone, but maybe Email#0 is never opened there. With POP3, you wouldn't know whether Email#0 was read; the protocol just assumed you read it. POP3 was more useful back then, when you only had one computer of any sort and didn't have to worry about keeping multiple devices in sync.

-------

How did forums work back then? Through USENET. A similar concept, where you'd download the messages, then disconnect from the internet as you read all the updates.

BBS machines, however, were online-only, and the start of real-time collaboration IIRC (not that I ever used a BBS, but that's my understanding of that old telnet technology). I don't know the history: whether BBSes or IRC came first (or which was popular first...)

The "Killer App" for the internet, AOL Email / Juno Email / etc. etc. was offline-first, by and large. At least, to me (a good ol' "Eternal September" user who joined the internet community well after 1993).

--------

Anyway, you'd read and write emails while offline. Then you'd hit "send", which would boot up the modem, take over your phone line (woops, Mom's talking on the phone. Sorry mom!!). Wait for Mom to get off the phone, THEN you connect to the internet...


> Anyway, you'd read and write emails while offline. Then you'd hit "send", which would boot up the modem, take over your phone line (woops, Mom's talking on the phone. Sorry mom!!). Wait for Mom to get off the phone, THEN you connect to the internet...

And then send the whole batch all at once.

You can still do it if you want to.

fetchmail to handle the mail retrieval. Add notmuch if you want fancy indexing. Any local client to compose. Delivery is a bit annoying to set up initially, in my experience.

https://notmuchmail.org/software/


> The POP3 protocol still works. You can still get Mozilla Thunderbird and grab POP3-style emails from GMail or whatever. But people want IMAP instead, an always-connected model provides better idea of which emails were "read" or "not read", for example.

Well, IMAP isn't necessarily always-online; it supports that, but you can just as easily connect, download the messages, and disconnect. With IMAP you do get push notifications for email instead of polling, though, which is nice.


I am "only" 29 years old but I remember that. Using Outlook Express on Windows 98 to download the emails in the morning (only one or two), then writing one or two and waiting for them to be sent at night when I connected again. Lovely memories, thanks for triggering them!


It was similar with FidoNet, which I had before I could afford internet in Russia. Every day you'd dial what was effectively a BBS (some dude with a modem on his home phone line), download the personal messages and forums (echomail), then read/respond offline, and if you wanted connect back to send your responses.

Ah, those were the days. No endless distractions of always being online :)


Uhhh

Have any of y'all ever heard of something called "Adobe Creative Cloud", it's...

It is literally trying to answer most of these questions for a set of titanic applications that predate the entire world wide web, it solves, offhand, at least five of these seven principles - it's (1) built around apps that have been working with local files since the eighties, (2) can sync your work between multiple devices, I just carry my laptop everywhere so I dunno how well it works, haven't played with the iPad apps yet so I dunno about those, (3) works just fine with no net access (though it'll cut you off if you don't hit the authorization servers every few weeks), (4) I am not sure how it solves the conflicting changes problem as I work solo, (5) your data's on your local device and in Adobe's cloud for as long as you subscribe, (6) okay well stuff's probably stored unencrypted, unless "I turned on whole disc encryption" counts (7) it's your own files and your own backup strategy.

That's five checkboxes out of seven, better than the 4/7 that "email shit around" and "github" get.

And it is relentlessly uncool. But it has been here for a good while and none of y'all have noticed it because it's built for artists.


Last I checked, Creative Cloud isn't truly offline-first. How many times have I opened Illustrator on an airplane, only for Creative Cloud to completely block opening my local application until I sign in to the cloud (mostly to verify that I've paid for my subscription this month). That's not what offline-first software should look like.


I want to own my data and my tools.

I don't want to pay a subscription service to Adobe any more than I want to pay a subscription service for my desk and chair.


The tough thing is that most software products you use are actively maintained and updated. That costs them money each month, so they pass those costs to you.

It would be great if we could see more software sold as-is, without updates. But, especially in the world of online-capable tools (which not everything is, but most apps can access the internet), ongoing security maintenance is important. I'm not sure we can have it both ways.


I haven't given it much thought, but I like JetBrains' compromise[0]. You pay a subscription to get all the updates, but you keep a perpetual license to any specific version (major+minor; you still receive bugfixes) that has been covered by your subscription for 12 months or more.

This way you can choose to receive updates indefinitely and support development with your subscription, or buy/stay with a version you are using if you do not need/want the newer ones. (Apparently, buying an annual subscription grants you the perpetual license immediately; I have no experience with this.)

[0] : https://sales.jetbrains.com/hc/en-gb/articles/207240845-What...


Agreed! I think that's a great model. Panic did the same thing recently: https://nova.app/buy/


> The tough thing is that most software products you use are actively maintained and updated. That costs them money each month, so they pass those costs to you.

Adobe managed to stay in existence and be profitable for a few decades before switching to software rentals.

What changed in the creative space that caused 'traditional' software sales to no longer be able to pay the bills? And why doesn't that also apply to, say, Capture One?


Most apps rarely introduce compelling-enough new features to justify buying a newer version anymore. We'd all mostly be fine today using CS5 and Office 2007.

Subscription models are about forcing upgrades in a world where users increasingly aren't actually benefitting from them.


I was never a paying user of Adobe products, but my understanding was that they'd release a new (yearly?) version as a paid upgrade. It would presumably have a killer feature that would encourage enough of the users to buy, which (hopefully) retroactively funds the cost of the new version's development.

From the perspective of their business, it's advantageous to have the consistency of $X / month forever instead of "hopefully a huge yearly sales month, followed by mostly nothing until next year". Businesses like predictable revenue.


> Businesses like predictable revenue.

Then make the subscription worth something over and above the software:

* https://www.captureone.com/en/products-plans/single-user/cap...


> What changed

Businesses often prefer operational expenses over capital expenses. What changed is that your average business IT environment (bigcorp or mom's basement) now has reliable access to the internet, which means an operational-expense model for software has only recently become possible at all.


And why did that preclude Adobe from also allowing actual purchases of their products instead of just the rental model?

Capture One has subscription and license:

* https://www.captureone.com/en/products-plans/single-user/cap...

If I had a need for, say, cloud/sync features I'd understand a monthly cost, as servers and bandwidth cost money; totally on board. But for people who do not need/want cloud-y stuff, what recurring cost is there?

I was happy to purchase YNAB4 for budgeting, and would have purchased YNAB5—but it's subscription and I have no need for the sync stuff that's part of the 'deal'.


It's a lot easier to develop a single software product continuously by tying continuous development to continuous revenue. It frees the business to devote its resources to development instead of product sales strategies and version differentiation which just distract it from focusing on its core product. This also enables their customers to do the same thing, tying continuous use of a service to continuous production of its own product or service. If a business can completely outsource the problem of providing good tools to their professional digital artists by paying Adobe $Xk/year, it might even be worth it to them financially just to save everyone the time of deciding whether to upgrade or not every other year. OS updated? New file format? Solved. New employee? Employee left? Solved.

This is like IT infrastructure moving to the cloud: businesses decide that the problems of maintaining IT infrastructure 100% in-house are not worth it when you can outsource a big chunk of the business problem for a known cost. In my opinion all of these are examples of the same trend that also includes the direction that k8s, serverless, and even companies like Uber, DoorDash, WeWork etc. are moving towards: a global relaxation of artificially maintained buffers down to solving problems continuously, automatically, and on demand. (Since it raises the question: my critique of Uber et al. is that they're trying to be too big for their problem.)

I think this is the general trend, but I don't know why. My guess is that it is vastly more efficient; not more efficient in cost, but more efficient in time.


Would it be too reductionist to say then that the main value of a subscription as opposed to just buying the major versions you want is that you don't have to pay the costs for a decision that isn't impactful? If not, could an org not otherwise reap those same benefits by just choosing to upgrade every major version, but then retain the flexibility to not shell out for a new version if a pandemic temporarily hurts finances or something?


Maybe they don't need to be though. One of my greatest regrets is that I couldn't afford to purchase Photoshop and Illustrator back when that was still an option. The Adobe CS6 suite is software that is "done" as far as I'm concerned. I would gladly pay $1000 each for CS6 Photoshop and Illustrator with absolutely no future updates if I could.


Too often, those updates break already working tools. I don't need my essential app insisting that I wait 10 minutes to upgrade when I'm 1 minute away from giving a presentation using it.


> I want to own my data *and* my tools.

More power to you! But this article is specifically about local-first/offline SaaS, which is what Creative Cloud is.


> Have any of y'all ever heard of something called "Adobe Creative Cloud", it's...

…something that I have to rent and can no longer own.

See also losing user's data:

* https://www.theverge.com/2020/8/20/21377411/adobe-lightroom-...


Adobe Creative Cloud is probably the worst sync software I've ever encountered, on Mac and on Windows. It's probably even worse than a one-week demo built on the Windows Cloud Sync Engine. Most Adobe products are crap: slow, with tons of bugs. Their PDF specification is so complex that even Adobe Reader does things differently than the spec.


But you still don't own the tools. Can't run Photoshop if Adobe decides they can't, or don't want to, serve you anymore.

There are some other comments that highlight why this matters, but I want to add to this: a year ago, the US government decided that it very much doesn't like the Venezuelan government anymore and issued sanctions. Adobe promptly announced that it would cancel all subscriptions in Venezuela, no refunds - essentially shutting down the entire creative industry in the country. Now, to their credit, they managed to get an exemption from the White House just before the deadline and ultimately did not cancel Venezuelan subscriptions - but had the US government been a bit more stubborn, this could have played out in an entirely different way.

Point being, a hard dependency on a non-commodity cloud service introduces a new class of problems that locally owned software doesn't have.


You need internet access to use creative cloud.

Cracked creative cloud is a great product, but the creative cloud they sell in stores is really bad. There's nothing worse than being 2,000 miles out to sea with a bunch of footage to edit and no internet to check in with the copyright parole officer.


I completely agree, but I think the days might be numbered.

In Lightroom what you're describing, with the easy to work with plain files, is called Lightroom Classic. It's still supported alongside regular Lightroom, but it wouldn't be a huge surprise if it gets deprecated at some point. It also only works on desktop.

What is now called regular Lightroom requires Adobe cloud to sync between machines instead of letting you manage your own files & syncing, and is the only way to sync between the newer iPad version of Lightroom or Photoshop and the desktop.


Most of these points are also true for Microsoft Office.


The basic problem here is not a tech problem, it's a business problem.

Your data is siloed inside a cloud-only app because it means you'll keep paying for it. You can't export it or share it to other apps because that'd be competition for the service you're paying for. It's online-only because it makes the entire service useless once you stop paying.

There are technical solutions for this, and they'll get no traction whatsoever so long as it's more profitable to keep people locked into a cloud-based service.


It's not just the recurring-payment part either, which makes for a simpler business model to fund updates. Software as a cloud service also makes it easier to provide support to users, it's impossible to pirate, and it's easier to support most platforms.


Working at a hosting provider in the past I realized I'd be out of a job if web browsers implemented native support for torrents real fast.


I tried to create a collaborative (the most important point), local-first, end-to-end encrypted app.

I really tried. But it's hard, really hard. Like all distributed systems.

CRDTs are far from a panacea:

* No implementation is compatible with another

* There are very few implementations in languages other than JS.

* The biggest problem is when two documents have diverged for too long (think a blockchain fork), e.g. if you go offline for 4 days while your coworker works on the same document as you. You have to fall back to differential sync, like Git.

And that's just for sync; to move to P2P you then have to handle NAT traversal, distributed identities, rendezvous points, and much more.
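To make the difficulty concrete: even the simplest state-based CRDT, a last-writer-wins map, takes some care to get the merge properties right. A toy sketch (real implementations like Automerge or Yjs are vastly more involved, which is the parent's point):

```python
import time

class LWWMap:
    """Last-writer-wins map: a minimal state-based CRDT.
    Each key stores (timestamp, node_id, value); merge keeps the newest write."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.entries = {}  # key -> (timestamp, node_id, value)

    def set(self, key, value, ts=None):
        ts = ts if ts is not None else time.time()
        self.entries[key] = (ts, self.node_id, value)

    def get(self, key):
        entry = self.entries.get(key)
        return entry[2] if entry else None

    def merge(self, other):
        # Merge is commutative, associative, and idempotent, so replicas
        # converge no matter the order in which they exchange state.
        for key, entry in other.entries.items():
            if key not in self.entries or entry > self.entries[key]:
                self.entries[key] = entry
```

Even this toy version already bakes in a policy decision (last write silently wins), which is exactly the kind of divergence problem that gets much harder after days of offline edits.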


For a lot of people for a lot of applications and tools, this level of sync isn't all that necessary though. If you are working on something yourself and not as part of a team then the age-old save-document-and-reopen-elsewhere method could work as well as it ever did. You don't have to have it open on your phone, laptop, desktop, and fire stick all at the same time with immediate sync. Have your offline documents on each machine, automatically back them up to "the cloud". If you edit one in two places because of disconnection issues then do a manual merge like we did in the old days (hopefully it won't happen often enough to be that much of a pain). In version 2+ of your product, once the base features are proven viable (and for commercial use, profitable), start to worry about automatic sync/merge/other with a granularity smaller than a whole document.

A lot of offline-first efforts are trying to solve every possible use case from day one, from single person with a laptop to huge international teams all working on the same asset at the same time and everything in between & beside, and in that direction madness lies. Implementing a useful, viable, relatively minimal product, and expanding its capabilities later would make more sense, though of course that suggestion is immediately countered by fears that the next clone of the idea will implement something first and be crowned king leading the thought process back to "must do it all, must do it all now".


You speak my pain... took so much pain to overcome these problems when delivering TerminusDB... taking best ideas from blockchain, git and rolling them all together... this white paper we wrote might remind you of past suffering... https://github.com/terminusdb/terminusdb-server/blob/dev/doc...


This is kind of funny because of course everything before about 2001 was “local first”. There have been multiple generations of patterns for connecting a local app to a network for collaboration, so there is a lot of history to check out. Ray Ozzie’s Groove was the closest thing I can remember to this: a local object store with P2P and central-store connectivity, based on a common synchronization algorithm. CRDTs have come a long way so that may be an important new angle.


This is a big one for me. But I think there is a missing level here. Local as in area, not just local to the machine.

Municipal and Neighborhood level networks would solve a lot of problems inherent in the broader internet, as well as making solving problems as discussed by the OP easier.

For example, it's a lot easier to trust your neighbor than someone on the internet (and better too). For one you can look them in the eyes and shake their hand, for another you are both governed by the same court (though this does make privacy more locally relevant). On the flip-side this means local people don't necessarily have to be great at the internet themselves to use these complex hosted programs. It's just another website rather than setting up a google docs server or using a git repo.

Continuing, another problem it solves is local discoverability. There are a thousand products out there that do local services, from craigslist, on down to "local wikis". But these services don't often care about your community, and it can be hard to know what other people are using (separating local efforts across a global network). Putting them all on a local network, with a local index (and search engines) allows better discoverability nearby (especially if portals into neighboring communities can be connected, eventually perhaps into a federation of sorts), but also services that are oriented to local needs (and issues and regulation).

I'm not saying this would replace the internet. Obviously not. But it would provide a better, safer, and more useful "local layer" of the internet to go to first (to say nothing of the social media security and child safety aspects). That layer also happens to be faster and more resilient (it doesn't go down when the internet does). The problem of course is shit ISPs, which is why the MeshNet people do what they do. And having applications and a community ready for it. The technology (and culture) would also be useful for communities that can't participate in the global network, from isolated communities (due to poverty, authoritarianism, or remoteness) to future ones (spaceships, remote colonies).

Just a thought.


Local seems like an early-internet drawing-board idea that didn't take off and got forgotten; it practically sounds like something you could gaslight quite a few people into developing fake memories of with a few consistent-looking artifacts from the BBS era. In fact, much of it is essentially a reinvention of the BBS, which had a soft localness due to the long-distance charges on the phone calls modems needed. Mail readers were an option to minimize BBS connection time.

Neighbor trust wouldn't last long, if it even existed in the first place, as soon as hacking easy targets became a thing. Most people leave their routers secured by default, and even ISPs who use them to distribute wifi "publicly" are supposed to leave them secure - indicating the preferred level of local network interaction is generally "none".

The locality is the worst of both worlds: it arbitrarily limits discovery for those who want to see and be seen, while providing no security, since anyone could bridge access in or likely spoof an identity. It seems like something that would rapidly wind up redundant for all terrestrial applications even if it had existed earlier.

Distant ones, I suspect, would either tolerate the higher latency or maintain mirrors or caches - which would likely wind up a subset of the content that doesn't depend on being current, and would probably become a format that doesn't mind the "necroposting" and delayed reactions you get when responses from space arrive hours or days later.


> Neighbor trust

You misunderstood this. My trust of neighbors is more social and less technical. I am less likely to run into bots (and predators) on social media because local administration means those people must be within the jurisdiction of local law enforcement.

> The locality is a worst of both worlds, arbitrarily limiting discovery if they want to see and be seen and not providing any security as anyone could bridge access within or likely spoof an identity.

Part of the point would be to disallow this bridging. If you wanna be seen on both the internet and locally, that would be easy to do. But bridging outsiders into the local network means taking responsibility for them and their actions. And no one would want to do that unless it's people they know.


Reminds me of when broadband was rolled out in Romania in the early millennium, and the local ISP would give you details for a server where you could share films and music with people from the same city.


I’ve tried to follow this approach with my own app[0]. An account is optional, and a subscription plan is available to sync your data across devices.

There have definitely been some challenges with this approach, from available tech stacks to customer support and user expectations. There were also some unexpected upsides, like requiring fewer server resources. All in all I think it was worth it, and I hope this trend continues.

[0] https://mochi.cards/


TerminusDB and Hub (co-founder here) is offline-first open source data collaboration software. We built the service so you can work offline for as long as you want and then resync when you're online again. We always felt this was the most important aspect of collaboration - you don't want to just update a common live view like Google Sheets; you want to go away, make mistakes, fix them, and merge when ready. We are also in-memory, so we put a lot of effort into compression, which worked out great as we can use your regular machine for the compute. Download the analytic engine, link it to the hub, and you can share data free and easy. Obviously talking my book, but it's a great backend for lots of this sort of collaborative local-first software (we're a DBMS at the end of the day). Will be rolling out p2p soon, I hope - so going from GitHub for data to Napster for data! #LivingTheDream https://terminusdb.com/hub/


TerminusDB looks quite interesting. The comparison that comes to mind, though, is Day/Hypercore rather than any of the solutions you have listed.


Agree that the solution on the site need to be changed! Don't know Day/Hypercore - do you have a link? I couldn't find (or at least I couldn't understand what I found!)


> Agree that the solution on the site need to be changed! Don't know Day/Hypercore - do you have a link? I couldn't find (or at least I couldn't understand what I found!)

My apologies, that was an autocorrect-fueled typo, it should have been Dat, not Day.


p2p sounds interesting! Do you have more information about how the data synchronization model works? Is it practical to ship TerminusDB along with your local software? Do you think it would be possible to run using just web technologies? (webasm, IndexedDB, ..)


We currently have merge using rebase, with optional "query fixup" to reconcile changes. We are currently experimenting with various automatic fixup strategies for different use cases. We also intend to have a full git-like merge in the near future. Yes, very practical to ship with local software - it is designed for that and I think it would be possible to run with just web tech.


We should at least have local software for home automation. It should not be necessary to go out to "the cloud" to set the thermostat or turn on the lights.


Plug for my open source "local-first" alternative to Google Translate (https://github.com/argosopentech/argos-translate). It has a Google Translate like GUI and lets you install packages to support translating between different languages.

It currently doesn't have the ability to fall back to web services for translations you don't have installed locally, but that's a feature I'm considering adding at some point. The best way to do this would be to track (locally) which translations you use frequently and then save them locally (a one-way translation package between two languages is ~100MB). This would give you the privacy/offline access of translating locally for language pairs you frequently use, while also seamlessly supporting the large number of languages supported by cloud services. You could also give power users much more fine-grained control over which translations they keep saved locally.
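The track-usage-then-install idea could be sketched roughly like this (a toy model; the class name, threshold, and return values are all hypothetical, not Argos Translate's API):

```python
from collections import Counter

class PackageCache:
    """Decide which translation packages to keep installed locally,
    based on a usage count that never leaves the machine."""

    INSTALL_THRESHOLD = 3  # hypothetical: install the package after N uses

    def __init__(self):
        self.usage = Counter()   # (src, dst) -> times requested
        self.installed = set()   # language pairs available locally

    def translate_requested(self, src, dst):
        pair = (src, dst)
        self.usage[pair] += 1
        if pair in self.installed:
            return "local"       # private, offline translation
        if self.usage[pair] >= self.INSTALL_THRESHOLD:
            # Download the ~100MB package once, then stay local forever.
            self.installed.add(pair)
            return "local"
        return "cloud"           # fall back to a web service for rare pairs
```

A real version would persist the counters and evict rarely used packages, but the core policy - frequent pairs migrate to local, rare pairs stay in the cloud - fits in a few lines.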


This overlaps with ideas like store-and-forward, delay-tolerant networking, and the Interplanetary Internet.

I've been writing local-first digital archives software. Our system uses git to store collection metadata and git-annex to manage binaries. It was originally designed to be used to allow partner sites with very low connectivity to contribute archival-quality video to a distributed archive. Data can be synced over the network or via physical storage sent in the mail.

Using Git/git-annex for this system has advantages for archival preservation. Copies can be located in geographically diverse locations and synced as necessary. The Git log's use of cryptographic hashes also makes data tamper-evident.
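The tamper-evidence falls out of each commit hashing its parent, so rewriting any old entry changes every later hash. A minimal illustration of that hash-chain property (not Git's actual object format):

```python
import hashlib

def commit_hash(parent_hash, content):
    """Each entry's hash covers its parent's hash, chaining all history."""
    return hashlib.sha256((parent_hash + content).encode()).hexdigest()

def build_chain(entries):
    """Return the list of chained hashes for a sequence of log entries."""
    hashes, parent = [], ""
    for content in entries:
        parent = commit_hash(parent, content)
        hashes.append(parent)
    return hashes
```

Tampering with the first entry in an archive's history changes every hash after it, so geographically distributed copies can detect the alteration by comparing their latest hash.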


I have a vision of computing that I call "personal application omnipresence" (PAO) [1] where applications are singular instances that you interact with concurrently from any device. Notionally, it's similar to having multiple "views" in a MVC architecture, and allowing each of those views to adapt to the capabilities of the device displaying the view.

The article linked here talks at length about concurrent edits and the data structures necessary to handle work by multiple application instances. I feel this is (potentially) unnecessary effort. Rather than build applications to support concurrent edits by fat clients, use thin clients that all speak to the same instance of the application with its singular state. Concurrent work would therefore be resolved in real time by a single application instance on a first-come-first-served basis, without any complex data structures or clever coding.

The root of so much complexity in modern computing is that all of us have multiple application instances when virtually nobody actually wants this. I don't want a separate email client on every device. I want a single email client that I can attach views to and interact with from anywhere.

[1] https://tiamat.tsotech.com/pao


Photo storage/management would be a typical use case for local-first software. Cloud photo storage aimed at the consumer market is costly, and we've seen Shoebox and Canon Irista shut down their services in the past few years. From an engineering perspective, the data should be stored as close to the user as possible, and centralized management of those photos is very inefficient too. However, people like the cloud experience, so a local-first alternative should provide the same convenience. SBCs and smartphones are getting more and more powerful and cheap, so it would be possible to have a local-first software suite running on an SBC that provides the same experience as the cloud, and we can build a private cloud based on p2p technologies like IPFS. Imagine having your own private cloud service, shared with family members/friends across different places, running on a 24x7 SBC. The DWeb needs a killer application to compete with the cloud, and photo storage/management has the potential. We've been working on https://lomorage.com/faq/ as a side project; it's still far from that vision, but it provides a workable solution and we keep improving it over the years.


Echoing jlongster's thoughts, I have experienced this first-hand working at Tact.ai (https://tact.ai) in the sales domain, which forced us (well, at least pre-covid) to think local-first if we wanted to succeed.

Most salespeople are always on the go, travelling through areas with bad network coverage, or on flights (which many salespeople consider wasted time because their tools all need a constant internet connection, and that's a domain where "time is money" is a core tenet). So when we started building, "local-first" was a strong differentiator for us and became a core philosophy for the company. For the app we have, 95% of functionality works perfectly well offline (except some features that need third-party API interaction that we've not found a clean way to set up so far).

Sure, it's so much easier to write a webapp, add a webview on top of it, and call it a "mobile app", but the effort that goes into building a local-first app (syncing data that can be edited in multiple places is a crazy engineering challenge) really pays dividends in the longer run.


I can't like this post enough. It's surprising how much we've dropped the ball on UX. Is it easy to address those points? Nope. Is it feasible? Absolutely! Back in 2013, we had built a network file system that included a ton of client-side optimizations [1] and gave almost local-filesystem performance, in both metadata-heavy (e.g. building the Linux kernel) and data-heavy workloads (e.g. video streaming). I'm sure with the lego pieces we have today (e.g. service workers, wasm, graphql), we can do just as well.

[1] https://www.businesswire.com/news/home/20130823005118/en/Mag...


A lot of the philosophy is similar to the Solid (Social Linked Data) project that Tim Berners-Lee started.

https://en.wikipedia.org/wiki/Solid_(web_decentralization_pr...


I think there could be an opportunity here for a company like Dropbox to provide an API that would allow local-first apps to do collaboration more easily.

You can easily just drop a file in Dropbox to move it from place to place, but it's hard to allow multiple apps to modify that file in real time.


Dropbox itself used to offer APIs to support such use cases. "Local-first" software wasn't called that yet, but that's definitely the kind of applications they intended to enable. Alas it was too early and these features got sunset back in 2015...

https://dropbox.tech/developers/deprecating-the-sync-and-dat...


I feel like CouchDB doesn't come off as well in that article as it deserves:

- It's fast, especially if you run it on a single node with proper hardware.

- It's well suited for multi-device apps. You even get a changes feed for live replication.

- I'd rather bet my money on CouchDB (Apache Project with significant support from IBM) being around in 20 years still than on Dropbox.

Privacy and user control are of course up to the app developer. Also, it's no worse a CRDT than the proposed structure; you just need to design the document structure well: in the given example, each todo should be a separate document and voilà - you'll only get conflicts if the exact same entry is modified.
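The granularity point can be illustrated without CouchDB itself: if each todo is its own document, concurrent edits to different todos never touch the same record. A toy three-way merge over a per-todo document store (a sketch of the idea, not CouchDB's replication protocol):

```python
def merge(local, remote, base):
    """Merge two replicas of a per-todo document store against a common base.
    A conflict only arises when the *same* document changed on both sides."""
    merged, conflicts = dict(base), []
    for doc_id in set(local) | set(remote):
        l, r = local.get(doc_id), remote.get(doc_id)
        if l == r:
            merged[doc_id] = l
        elif l == base.get(doc_id):
            merged[doc_id] = r        # only the remote side changed it
        elif r == base.get(doc_id):
            merged[doc_id] = l        # only the local side changed it
        else:
            conflicts.append(doc_id)  # same todo edited on both sides
    return merged, conflicts
```

With one document holding the whole todo list, every concurrent edit would collide; with one document per todo, conflicts shrink to genuinely simultaneous edits of the same entry.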


> How would an offline-first Internet look like?

1Password comes to mind. If you sign up for the subscription, you can easily access it offline, sync to the cloud now or later. Enable 2factor, and you can still access the vault offline, want to sync with the cloud? Enter the 2factor ID and you're in business.

> Your Work Is Not Trapped on One Device

Microsoft Office does a great job with this via Office 365. You have the powerful Word running locally, saving both on disk and in the cloud. Then if you switch machines, or want to use it in the browser? Easy, no problem.

> Seamless collaboration

Doesn't Git solve this problem?


Concerning O365, is it only me, or have they invested exactly zero effort in making Word Online correctly display a document with even slightly complicated formatting? Documents that are displayed horribly in Word Online actually work quite well in Nextcloud/Collabora Online.


>> Seamless collaboration

> Doesn't Git solve this problem?

The author mentions that:

"The collaboration approach that I personally like the most (and the one I feel could be embedded in “local-first applications”) is the one used by git."


Local-first software is more annoying and requires more sophisticated thick clients to develop, especially as more of the world becomes connected - even airplanes are connected now, and soon the middle of nowhere will be too, with Starlink.

So it's more expensive to write local-first software; as a result, you will probably deliver features more slowly than your cloud-first counterparts and thus eventually be out-competed, unless you're leveraging local-first to deliver something your cloud-first counterparts can't do well, like speed and performance.


I think this is one reason service workers haven't caught on. Offline-first is nice, but it has an obvious shelf life given the progression of technology.


This is a very interesting introduction to CRDTs. I have recently been learning about this topic and found some other useful blog posts and articles about it. Here is a list of them: https://knowledgepicker.com/t/226/conflict-free-replicated-d... - if you know some more good ones, feel free to add them there; it's open for contributions!


I’m currently working on a project for a local-first SaaS company

They have perpetual licensing (optionally you can subscribe monthly if you don’t want to pay everything upfront) and it really does put power in the user’s hands

It’s fast


Good read with similar ideas and same name https://www.inkandswitch.com/local-first.html


This paper was referenced in the article.


I run a local instance of the web platform I developed (for wikis, publishing, etc) which is called https://quanta.wiki. You have to configure a docker file, and install docker to run it, so it's not a 'product' others would easily run this way, but I'm sharing this platform here with you guys because it has a lot of the same goals and designs.


I highly suggest looking into hypercore [1] stack and Beaker Browser [2] in particular.

The decentralization is absolutely possible nowadays, but there are few incentives to push it forward, unfortunately.

[1]: https://hypercore-protocol.org/

[2]: https://beakerbrowser.com/


We've built a local-first social media app, planetary.social. It's in TestFlight; go to bit.ly/planetarytesting to join.


In theory I agree this is a good idea. In practice the type of application I'd want to build local first would involve users downloading 1GB of data before they get to do anything. This will never work on smartphones.

I also remember using an application that did this very poorly. It loaded all data into RAM causing it to consume 1.5GB for very basic functionality.


> In practice the type of application I'd want to build local first would involve users downloading 1GB of data before they get to do anything.

Do you have an example?


People couldn't sell adverts?

Working on PouchDB, I spent a good amount of time thinking about how to "sell" local/offline-first apps. It seemed a lot to me like the problem was plain old capitalism: offline/local-first software is faster and more robust, and in most circumstances it's the better choice for the user. However, the economic incentives aren't there to build the best software; the economic incentives are to slow webpages to a crawl with adverts and planned obsolescence.


Slowing to a crawl is essentially counterproductive greed, as it repels the user base and doesn't aid sales. Google won out with text ads as opposed to "punch the monkey" autoplay banner crap.

Aside from that, there is the issue of portability, which comes with the power - namely supporting everything - and the barrier to entry. But the bigger difference is probably convenience. The web typically needs no install and less commitment, and gives a single set of servers for everybody to work against.

Take a general referral by word of mouth. The web is more or less "go to the URL and try it, with no commitments or extra steps", and web shopping found that not requiring registration was crucial for getting people to actually try. Now compare local-first: "Hey, you should download Notepad++/git/etc." And that is before IT policies or licenses get involved, which can mean a certain segment gets a hard block they wouldn't for a web service - unless IT specifically decided that, say, "pastebin is a hard no" and then missed that the new hotness is, say, clipboard.com anyway.

We had one approach that tried for its own best of both worlds with its own sandbox: in-browser Java applets.


I don't consider the web and offline-first/local software to be mutually exclusive. PouchDB is a database built on web technology in order to bring offline capabilities to web sites/apps, and service workers handle caching static assets reasonably easily.


Thanks for maintaining PouchDB, it's an amazing library!

But true, if our customers wouldn't pay for our app and we had to resort to ads instead, offline-first wouldn't have been a priority, perhaps not an option at all.


This is an absolute must for any kind of computing activity that's not on Earth or in Earth orbit.


What if local-first software/apps leveraged products like Syncthing (at some layer in the back end) to manage offline data sync between instances of the same software/app on other devices?


What does this look like for email? Do we have to trust a particular email account for everything?


Ummm, IMAP? Reading the article, this was my first thought: email applications like IMAP clients that work offline are the perfect example of this philosophy.

If you can't access your mail provider, you still have all the data from the last time you did, you can search old emails because they're stored locally (you can set how much you want to keep for these situations), and you can write an email and it will be stored in the outbox to be sent when connectivity is restored. In other words, your email experience is exactly the same as with connectivity, except for a banner that reminds you that it hasn't been able to download new mail since HH:mm.
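The outbox behaviour described above is a simple pattern to sketch: outgoing messages queue locally and flush when connectivity returns (a toy model of the idea, not a real IMAP/SMTP client; all names are hypothetical):

```python
class OfflineMailer:
    """Queue outgoing mail locally; flush when connectivity returns."""

    def __init__(self, transport):
        self.transport = transport  # callable that actually sends, e.g. via SMTP
        self.outbox = []
        self.online = False

    def send(self, message):
        self.outbox.append(message)
        if self.online:
            self.flush()

    def flush(self):
        while self.outbox:
            self.transport(self.outbox[0])
            self.outbox.pop(0)      # drop only after a successful send

    def set_online(self, online):
        self.online = online
        if online:
            self.flush()            # drain anything written while offline
```

The user-visible effect is exactly what the parent describes: "send" always succeeds immediately, and connectivity only determines when the message actually leaves the machine.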


I remember Microsoft pushing hybrid apps called "Smart Clients" back in 2003.


We had local-first software, long ago with Lotus Notes / Domino.



