For the past few days I've been noodling with an idea for sharing data between applications which I'm calling "JSON Event Log", or JELO for short. Since the idea isn't in a finished form, I'll just share the thought progression which led to it.
Problem: I want to buy ads in newsletters to promote my startup. The first step is finding potential newsletters to sponsor (sourcing). This takes some legwork but is doable. Besides looking around yourself, there are a couple of directories, like SwapStack, that list newsletters looking for sponsors.
However, sourcing isn't the hardest part. The real problem is predicting which newsletters will give a good return on investment. It's standard for publishers to say how many subscribers they have and what their open rate is, but it's much less common to say how many clicks previous ads got—and there's a lot of variance in that department.
I've complained about this previously. It's been on my mind again recently because I'm thinking about getting back into sponsoring newsletters. So, I've been thinking more about potential solutions.
I break the problem down into two parts. First, you need a bunch of raw data. For example, "Alice says she bought a classified ad in Bob's newsletter for $50 and got 30 clicks and 15 newsletter signups from it." Second, you need to aggregate that data so that you can make a decision—i.e. "based on all this raw data, I think I should buy an ad in Carol's newsletter next." Obviously there are tons of problems that fit into this kind of structure. Take Google, for instance: they collect a bunch of raw data about websites, and then they aggregate it so that given a search query, they can pick which websites to show you.
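To make the two-part structure concrete, here's a small sketch in Python. The record mirrors the Alice example above; the field names and the cost-per-signup metric are my own invention for illustration, not any agreed-upon schema.

```python
# A hypothetical raw-data record for one sponsorship, mirroring the
# Alice example. Field names are illustrative, not a spec.
result = {
    "sponsor": "Alice",
    "newsletter": "Bob's newsletter",
    "ad_type": "classified",
    "cost_usd": 50,
    "clicks": 30,
    "signups": 15,
}

# The aggregation step: given many such records, compare newsletters
# by a simple cost-per-signup metric.
def cost_per_signup(records):
    totals = {}
    for r in records:
        name = r["newsletter"]
        cost, signups = totals.get(name, (0, 0))
        totals[name] = (cost + r["cost_usd"], signups + r["signups"])
    return {
        name: cost / signups
        for name, (cost, signups) in totals.items()
        if signups
    }

print(cost_per_signup([result]))
```

Any number of different `cost_per_signup`-style functions could be written over the same pool of records, which is the whole point of keeping the raw data separate from the aggregation.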
However, if you have one big service that does both parts—collecting raw data and aggregating it—it's vulnerable to people who have an incentive to game the aggregation. In Google's case, that means SEO spam. For newsletter ad stuff, this is why it wouldn't be as simple as someone making a newsletter ad marketplace and providing search and recommendations based on individual newsletters' performance. As soon as it got a meaningful amount of traction (if not sooner), it would be overrun.
I think the solution is to separate raw data collection from aggregation. If you've got a bunch of raw data out in the open, then it's easy for lots of different aggregation services to pop up, which shifts power away from spammers. With one aggregator and many spammers, it's just too hard for the aggregator to keep up. But with many aggregators, spammers have to do more work—and the more successfully they game a particular aggregator, the more users will shift to that aggregator's competitors.
Bringing it back to the problem at hand, I've been toying with the idea of starting some kind of network for newsletter ad performance information. A really easy way to get started would be to use Google Docs. I would create a doc for myself, and then every time I buy a newsletter ad, I'd update the doc with the results. I'd share the doc with other newsletter sponsors and encourage them to use the same template for their own doc. We'd add a section at the end of our docs where we'd link to each other. These links wouldn't necessarily be endorsements; they'd just be acknowledgments of existence.
Each doc would also have a section for authentication. In my case, I might say "I run The Sample. As proof, you can see on the landing page that there's a link to @the_sample_umm on Twitter, so that account is run by The Sample. And here is a link to a tweet from @the_sample_umm which links back to this document and says 'this is a doc I made.'"
Then, if you want to buy a newsletter ad, you've got a bunch of raw data that's fairly convenient to access. The aggregation part is left to you as an individual. You can follow the links between docs, checking authentication along the way. Once you have a bunch of docs bookmarked, you can decide which ones you trust (e.g. maybe in some cases you know the author) and then look to see which newsletters they've had the most success with. Of course you'll also need to consider newsletter/ad topics, since every newsletter audience will have different interests.
This would probably be good enough to support individual, manual aggregation. The main downside is that while it's easy for humans to read a bunch of Google docs, it's not easy to write programs that do it. Ideally you have all the raw data in a format that's easy for machines and not just humans to understand. Then you can create interfaces that are designed specifically for newsletter sponsorship perusal, and you can more easily create aggregators like search engines and recommender systems.
And crucially, if this machine-readable raw data is public, the barrier to entry for making these aggregators is small. Usually, when building an aggregator, collecting the data is 90%+ of the work. Lower barriers to entry mean more aggregators and more spam-resistance.
So in what format do we publish this newsletter ad performance data? That's where JSON Event Log comes in. Since I've already written a bunch of paragraphs, I have little desire to go through all the details of it, but I'm envisioning something like RSS or JSON Feed. The main difference is that RSS et al. are meant to model events of the form "I just published a blog post/podcast episode." I want a similarly simple feed format that's designed for any kind of event, like "I just got these newsletter ad results" or "here's a song I like" or "this product sucks."
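To give a feel for the idea, here's a sketch of what a JELO document might look like, loosely modeled on JSON Feed. Since the format doesn't exist yet, every field name here is a guess on my part, including the version URL.

```python
import json

# A hypothetical JELO feed, loosely modeled on JSON Feed's shape
# (top-level metadata plus an "items" array). All field names are
# illustrative guesses, not a published spec.
feed = {
    "version": "https://example.com/jelo/0.1",  # hypothetical
    "title": "Jacob's event log",
    "items": [
        {
            "id": "1",
            "date_published": "2021-12-20T00:00:00Z",
            "type": "newsletter-ad-result",  # event types are open-ended
            "data": {
                "newsletter": "Bob's newsletter",
                "cost_usd": 50,
                "clicks": 30,
                "signups": 15,
            },
        },
        {
            "id": "2",
            "date_published": "2021-12-21T00:00:00Z",
            "type": "song-recommendation",
            "data": {"song": "(some song)"},
        },
    ],
}

# As with RSS/JSON Feed, a consumer fetches the document and filters
# for the event types it cares about.
ad_results = [i for i in feed["items"] if i["type"] == "newsletter-ad-result"]
print(json.dumps(ad_results, indent=2))
```

The key design point is that the item `type` is open-ended, so one feed format can carry ad results, song recommendations, product reviews, or anything else, and aggregators can ignore the types they don't understand.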
Maybe I'll write more about JELO next week.
I think the biggest problem with a lot of open web efforts is that the people who are most active in them tend to be idealists who like decentralization a lot but don't have an accurate model for when centralization is better. Any attempt to make the web fully decentralized is IMO doomed to fail due to simple evolutionary reasons (i.e. it'll get out-competed by things which aren't ideologically opposed to centralization).
The real question is not "how can we make everything decentralized" but "which things should we decentralize." The web started out decentralized, and now the pendulum has swung far to the "centralization" side—how do we bring it back toward the middle?
Do you know anyone in this space?
LinkLonk, an RSS feed recommender. It was on Hacker News the other day. (After seeing how many links in curated newsletters are just stuff from the HN front page, I always feel a need to give this disclaimer).
Backchannel, another paper from Ink & Switch. I only skimmed this one. Conceptually seems kind of interesting 🤷‍♂️.
Bring back Web1, a critique of web3's focus on money.