9 June 2020
So far this week we’re at 16 active users (up from 14 last week). I tried posting it on Hacker News (and reposting it with mod permission), but it was a dud both times. I have to say, it’s so much easier to get views for Clojure articles I write. I haven’t identified a niche specifically for Findka that I can market to effectively, so I’ll probably just continue writing Clojure articles and hosting them on findka.com. Seems to work well enough.
I also moved the Biff docs to findka.com/biff and added a “Sponsored by Findka” note to the sidebar, which I guess is true depending on your definition of “sponsored,” ha ha. All of the money I’ve been paid to work on Biff has come from Findka (vacuously, though I have preemptively signed up for GitHub Sponsors).
I was thinking a lot about various methods to promote Findka, but I’ve decided to just focus on writing articles for now because (1) it works, and (2) I enjoy doing it. I hadn’t been paying much attention to the latter point, and I think that was a mistake. My mental model was basically that you have fun building the thing first, and then you have to do un-fun things to get people to use it. But there’s power in structuring things around your strengths and interests; I just hadn’t made that connection to marketing at first.
I have some exciting news. Last week I experimented with Materialize.io, and I’ve confirmed that it can be integrated with Biff without too much trouble. This means you’ll be able to subscribe to arbitrary SQL queries (Biff’s current subscribable queries, like Firebase’s, don’t allow joins). It’ll work like this:
1. You specify which Crux documents/Biff tables you want to be continuously imported into Materialize (i.e. Postgres, basically).
2. Using those imported tables as sources, you write SQL queries, the output of which Materialize keeps up-to-date efficiently, and the queries can be as complex as you like.
3. The query outputs are exported back as Clojure data structures into their own Biff tables, which you can then subscribe to with Biff’s existing subscription method.
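To make step 2 concrete, here’s a rough sketch of the kind of query you could hand to Materialize. The `users` and `ratings` sources and their columns are invented for illustration; the point is that the view contains a join, which Biff’s current subscribable queries can’t express:

```sql
-- Hypothetical sources, continuously fed from Biff/Crux documents.
-- Materialize keeps a materialized view incrementally up-to-date
-- as the underlying sources change, even across a join:
CREATE MATERIALIZED VIEW user_rating_counts AS
SELECT u.handle, count(r.item_id) AS num_ratings
FROM users u
JOIN ratings r ON r.user_id = u.id
GROUP BY u.handle;
```

The output of a view like this is what would get exported back into its own Biff table in step 3.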
As an example of what you could do with this, check out the stats section in Findka’s sidebar (it won’t be visible if you’re on mobile). Right now I recompute those stats from scratch every 5 minutes. With Materialize, they could be updated incrementally whenever new data comes in. All I’d have to do is write a few SQL queries, and the results would stay up-to-date even at scale.
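For instance (purely illustrative; I’m inventing a `recs` source and its columns here), each sidebar stat could become a small materialized view that Materialize maintains incrementally instead of being recomputed from scratch on a timer:

```sql
-- Hypothetical: total recommendations served, always current.
CREATE MATERIALIZED VIEW total_recs AS
SELECT count(*) AS n FROM recs;

-- Hypothetical: recommendations per day.
CREATE MATERIALIZED VIEW recs_per_day AS
SELECT date_trunc('day', created_at) AS day, count(*) AS n
FROM recs
GROUP BY date_trunc('day', created_at);
```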
Now, at Findka’s current scale, there’s no need for Materialize. I could recompute the stats from scratch after every document write and it would still be plenty fast. On top of that, it’ll take more work for the Materialize integration to actually be useful at scale: for one thing, Biff itself currently only runs on a single machine; for another, I’ll be using Materialize’s somewhat-janky CSV import feature instead of going through Kafka.
But I’m still excited about integrating with Materialize at this stage because it has a path to scalability. People can start building applications using the integration, and as I continue to work on Biff, the queries will be able to scale without users restructuring their applications.
Plus, Materialize is just plain cool.