JSON-LD

Everyone hates JSON-LD, and with good reason. If you only encounter it briefly, it doesn’t seem all that bad. You just have a weird @context element in your JSON, and you can sorta kinda see what it’s for. Just an irritation. It’s people who really get into the details who quickly learn to loathe it. It’s got more than enough syntactic sugar to give anyone diabetes.

That’s really the purpose and the problem with JSON-LD. Its only job is to make the bitter pill of RDF sweet enough to be swallowable. This hides what RDF is for, and what you can do with it. It does this so effectively that most developers get far into the weeds of implementation without ever getting a basic primer on the subject.

So I guess I’m going to try to provide that here.

Struct

When wrapping your head around RDF, one big problem is that it violates one of the basic assumptions of computer programming: that data is kept in structures. Take the all-time classic useless example of C code.

struct Employee {
  char *name;
  int grade;
  struct Employee *manager;
};

There’s a lot of flexibility in that data structure. The string is its own data structure, and it may contain any number of characters. The manager could be NULL. Nevertheless, we have rules we can rely on. This employee might or might not have a manager, but we will always know whether they do or do not. We will always know how to reliably find that information in O(1) time.

Large software packages quickly grow hundreds of these things. I find that defining my data structures is usually the first step of prototyping a brand new feature. That’s because structures are how I think. Until I see the structure defined as code, I’m not really thinking through the details of what needs to happen and what can go wrong.

This makes it very hard to think about data which does not fit the mould of a struct. Like RDF.

Triples

An RDF document is not a definitive description of anything. It is a list of statements about stuff. Each statement consists of a subject, a predicate, and an object. (“Object” here is in the linguistic sense, nothing to do with object-oriented programming.) One basic statement in RDF might be “dog”, “bites”, “man”. Or, more practically: “Mount Kilimanjaro”, “is in country”, “Tanzania”. Every statement has exactly three elements. So, it’s called a triple.

But, no doubt someone decided to name a town “Mount Kilimanjaro, New Jersey” or something. So to be sure we’re talking about the right subject, we refer to those subjects by a URL. (OK, not a URL, an IRI. Whatever. No-one cares what that means.) Every thing in the whole world gets a unique URL. We can keep the useful English name, of course, by just recording a triple that “https://www.wikidata.org/wiki/Q7296” “has name” “Mount Kilimanjaro” or something like that. Importantly, that one town in New Jersey can have a name with the exact same sequence of letters as the mountain, without treading on anyone’s toes.
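Stripped of all syntax, a triple store can be sketched as nothing more than a list of three-element tuples. A minimal illustration (the example.test URL for the hypothetical New Jersey town is invented):

```python
# A triple is just (subject, predicate, object), with URLs as identifiers.
KILIMANJARO = "https://www.wikidata.org/wiki/Q7296"

triples = [
    (KILIMANJARO, "is in country", "Tanzania"),
    (KILIMANJARO, "has name", "Mount Kilimanjaro"),
    # The invented New Jersey town can reuse the exact same sequence of
    # letters as a name, because its identifier is a different URL.
    ("https://example.test/nj/mount_kilimanjaro", "has name", "Mount Kilimanjaro"),
]

def names_of(subject):
    """Collect every 'has name' object recorded for a subject."""
    return [o for s, p, o in triples if s == subject and p == "has name"]
```

Two things can share a name without sharing an identity; the lookup key is always the URL, never the label.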

Reification

So now we can go crazy building machine-readable statements into a database. But before long, we bump into statements such as “Joe Biden”, “was the winner of”, “USA Presidential Election 2020”. Ugh. You know what’s coming next.

Many things are definitely obviously true to all human beings with the mental capacity required to put on a pair of socks. But we have to acknowledge here, there is a significant minority of sock-wearers who don’t agree with our putative triple here, and they’re gonna tell us all about it unless we make room for them. We need to grit our teeth and somehow tag this statement as being in some sense “disputed”.

Now we’re getting meta. We’re making statements about statements. And we like writing down our statements in RDF. We want to say “‘Joe Biden was the winner of USA Presidential Election 2020’”, “is disputed by”, “[insert disingenuous corrupt bastard here]”.

RDF can handle that! Statements about subjects are also things, which is to say they are subjects themselves. Just give it a URL, and make your triples:

Subject | Predicate | Object
https://en.wikipedia.org/wiki/2020_United_States_presidential_election#Result | has subject | Joe Biden
https://en.wikipedia.org/wiki/2020_United_States_presidential_election#Result | has predicate | was the winner of
https://en.wikipedia.org/wiki/2020_United_States_presidential_election#Result | has object | USA Presidential Election 2020
https://en.wikipedia.org/wiki/2020_United_States_presidential_election#Result | was disputed by | https://x.com/profile/slimy_creep_88

This process of turning a triple into a subject is called “reification”, but I fully intend to never understand enough Heidegger to know why. It just is.
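In the same toy tuple representation used above, the reified statement is just another subject, with the predicate wording from the table:

```python
# The statement itself gets a URL, and then triples describe it.
STATEMENT = ("https://en.wikipedia.org/wiki/"
             "2020_United_States_presidential_election#Result")

triples = [
    (STATEMENT, "has subject", "Joe Biden"),
    (STATEMENT, "has predicate", "was the winner of"),
    (STATEMENT, "has object", "USA Presidential Election 2020"),
    (STATEMENT, "was disputed by", "https://x.com/profile/slimy_creep_88"),
]

def disputers(statement):
    """Who disputes this statement? Possibly nobody; possibly a crowd."""
    return [o for s, p, o in triples
            if s == statement and p == "was disputed by"]
```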

So wait… does that mean you can make statements about those statements too? Sure does! Woah. Dude. Inception.

Uncertainty

If we go back to that silly employee example, we could represent the same information:

Subject | Predicate | Object
https://gov.us/employee/1234 | has name | Donald Trump
https://gov.us/employee/1234 | has grade | 1
https://gov.us/employee/1234 | has manager | https://cia.gov.us/asset/jeff_e

But now, there’s no guarantee all three rows will be present. And there’s no guarantee that there will only be three rows. Furthermore, some of those rows might be reified and tagged as being questionable.

If you are a normal computer programmer, all of those properties should strike you as deeply upsetting. No-one would ever set out to write software based on such a flimsy foundation.

But this is a good way of describing reality. In reality, we are sure that we know some things, we’re unsure if what we know about other things is right or wrong, and there are some facts about things that we are sure we don’t know at all. We are, however, sure that we will never know everything. None of that is a good fit for a struct. But it’s all an excellent fit for RDF.
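A sketch of what this does to lookups: the struct’s O(1) field access becomes a query that may return nothing, one thing, or several things (triples invented to match the table above):

```python
def values(triples, subject, predicate):
    """Unlike a struct field, this may return zero, one, or many objects."""
    return [o for s, p, o in triples if s == subject and p == predicate]

EMP = "https://gov.us/employee/1234"
facts = [
    (EMP, "has name", "Donald Trump"),
    (EMP, "has grade", "1"),
    # No "has manager" triple at all: not an error, just not known.
]

managers = values(facts, EMP, "has manager")
# An empty result means "we don't know", which is different from
# "definitely has no manager". A struct cannot express that distinction.
```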

Federation

I believe RDF is very well suited for what should be the core purpose of the Fediverse: connecting existing communities, warts and all.

I frequently see discussions on how we can “grow the Fediverse”. Implicit in this is persuading people to leave behind their old communities on centralised platforms and use something like Mastodon instead. I think this is a mental habit carried over from Silicon Valley VC culture: “growth first, profit later”. It’s unhealthy.

Instead the goal I see, that I believe ActivityPub is uniquely well-designed for, is connecting communities without requiring anyone to move anywhere. For all the complexity of ActivityPub, including its message authentication, grafted-on tubercles like webfinger, and of course the madness of JSON-LD… it’s actually pretty lightweight to bolt on to an existing system.

Schemas

The basic architecture of web applications hasn’t changed much over the last few decades. At the core you probably have a database, or some other persistent data store. Classically the database is SQL, so you have tables, each of which has fields. Happily, rows in tables can be intuitively stored in structs in your code. And any number of frameworks will glue the two together with generated code that you skip over when reading stack traces.

When you build your own centralised application, you talk to your users to figure out how best to describe their world. You decide on your own schema. You code that up in a way that forces a rigid, eternal structure onto every element. “What do you mean the CEO doesn’t have a manager? Argh. OK, well what if we make this field nullable…” You make it through a few releases, change the schema when you must, and eventually settle on something solid.

Hacking it

So then what if we lose our minds and decide that our internal HR system needs to connect to the fediverse? Well, we keep our nice internal structure. And for the rest of the world, we hack it. Employees are actors, sorta. Promotions are activities, sorta. The URL to our intranet portal that has the employee number at the end is as good an ID as any. And it works, with broadly acceptable results, some of the time. That’s as much quality and reliability as social media has ever achieved, so it’s fine.
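A hedged sketch of what that hack might look like: an internal record squeezed into an Actor-shaped dict. Every field name and URL here is invented; the point is only that the mapping is shallow.

```python
def employee_to_actor(emp: dict) -> dict:
    """Employees are actors, sorta. The intranet URL is as good an ID as any."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Person",
        "id": f"https://intranet.example.test/employees/{emp['number']}",
        "name": emp["name"],
    }

actor = employee_to_actor({"number": 1234, "name": "Pat"})
```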

It is in this context that you have to understand just how loose ActivityPub has to be:

  • Fields might not be present
  • Extra fields you never heard of might be present
  • Those extra fields might have the same name as completely unrelated fields from someone else
  • The fields you have heard of might have a different meaning from how everyone else uses them

What I’m saying here is: if you get angry at developers sending ActivityPub to you that are messed up like this, you are cutting off the most important source of the growth of the Fediverse. You are sabotaging the enterprise.

It is not necessary. You can stop, breathe, drink a herbal infusion, and invest a moderate effort in interpreting what you have. No more than that. If you miss a few messages, it’s not important. It is not nearly as important as giving everyone in the world the opportunity to talk to each other, if and only if they want to.

Rules

The funny thing is, most of what I’m talking about here applies just as well to JSON. But JSON looks juuuust enough like a bunch of nested structs that it plugs straight into the struct receptors in programmers’ brains.

This causes trouble. Sooner or later JSON arrives that doesn’t have all of the expected fields. Exceptions get thrown and weekends get ruined. And in the aftermath, frustrated programmers demand to know why there are no rules in JSON. With the inevitable result: JSON Schema.

When I first learned about JSON Schema, I hit the roof. If you wanted structured data with well-specified integrity checks and formal schema definitions, we had that already. It was called XML. But no, people didn’t want any of those things. “I just want to blast my structs into a text file.” JSON exists precisely because it is simple and loosely defined. Adding a formal schema to that is an insult.

This is an anti-pattern, perhaps the single anti-pattern of the modern tech industry. One programmer operating alone in their 20% time has a problem and says, that can’t be so hard to solve. Just be pragmatic and get an MVP out the door. And then a few years later it turns out that the real world is more complicated than one programmer can imagine. At that point the reaction kicks in, and programmers start insisting on rigid technical standards. Soon it becomes the world’s fault that the standard is inadequate, and the world’s responsibility to adapt to that standard.

With the Fediverse, we have a real chance to avoid that anti-pattern. In my opinion the standard we have strikes exactly the right balance. Some things are pretty well-defined in the core. It’s enough to produce something useful. It also leaves huge gaps. Some of those gaps really do need to be filled in order to build something better than just somewhat useful, and FEPs are a good system for filling those gaps. But some of those gaps are structural, like the thermal expansion gaps built into bridges. Welding those gaps shut would cause the system to rip itself apart.

Being Dumb

It is occasionally claimed that although the details of JSON-LD are quite complicated, you can “just treat it like JSON”. This is technically true. I’m typing this on the second floor, and I could go through the rigmarole of taking the elevator down. Or, I could just jump. That second option is completely valid.

People do write code as if AP is just JSON with a weird @context boilerplate element. These people sooner or later will discover that this dumb approach doesn’t work. It might work for the simple majority of messages. But sooner or later they will find messages that don’t use inReplyTo but rather as:inReplyTo, and threading is broken for those messages. They will complain about the upstream implementation, only to be told that it works fine with other servers. Hours and hours of digging later, they discover that their own code is technically wrong. And for anyone who cares deeply about the code they’re writing, discovering that your code is wrong is deeply hurtful.

Intelligent observers will note the problem here: the developer did not properly read the specifications. Empathetic observers will identify a different root cause: the specifications were counter-intuitive and over-engineered.

I would identify a third, even deeper root cause here: the desire for correctness. I would say on the contrary, the original implementation was largely good enough. You should not expect that every message will be delivered perfectly. For a start, even if you yourself read the specification correctly, many other developers will not, so you will receive malformed messages anyway.

But beyond accidents, I am arguing that it is fundamental to the nature of federated social media that you can never achieve even a minimally-acceptable level of correctness, not with the best will in the world. Rather, the right way to wade into the wild world of social media is with a defensive attitude. The actors you will be exposed to vary from the painfully naïve to the actively malicious. Some of them will certainly use the wrong inReplyTo, or not include it at all. You need to write software that will survive relatively unbroken in those circumstances. Which means, if on certain articles their perfectly valid as:inReplyTo gets discarded because it is not recognised, your software should behave adequately anyway.
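A minimal sketch of that defensive attitude, checking the spellings named above and degrading to a top-level post when none match (a real JSON-LD processor would normalise these properly; this is the cheap version):

```python
def reply_target(note: dict):
    """Accept the common spellings of inReplyTo; shrug if none is present."""
    for key in ("inReplyTo",
                "as:inReplyTo",
                "https://www.w3.org/ns/activitystreams#inReplyTo"):
        if key in note:
            return note[key]
    return None  # degrade gracefully: render as a top-level post

target = reply_target({"type": "Note",
                       "as:inReplyTo": "https://example.test/note/1"})
```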

Perfectionism

The big problem here is the seriousness with which the community treats standards. At one point I saw some well-intentioned enthusiast asking the world, which AP software has the “purest” implementation? The implicit contrast is to Mastodon, which has always received torrents of abuse for its crimes against standards – often expressed as conspiracy theories. I appreciate the intent. Standards conformance is something to aim for. But if we want the Fediverse to reach its potential, and be something that our normie friends and family use, we cannot allow purity tests to become established.

So this is the message: yes you can treat JSON-LD as “just JSON”. If you do this, your code will be broken. Broken is fine! I fully encourage and support brokenness.

Moderation

But I want to return to triples. Remember that triples consist of a subject, a predicate, and an object. The key flexibility JSON-LD provides is that none of these require consensus in advance. This is obvious for subjects, and easily arguable for objects, but the argument for this feature of predicates requires a little elaboration.

On the Fediverse, I think the critical near-term use-case for JSON-LD is moderation. Especially composable moderation.

Status Quo

Today, the Fediverse mostly has local moderation – and it is failing badly. It is left to every instance to make their own decisions. But the scale of abusive material on the Fediverse is far too large for moderation teams to deal with effectively. They can generally react after the fact, but they are not efficient at blocking material before users see it and the damage is done. That kind of thing requires active cooperation.

And so inevitably, blocklists and #fediblock spring into existence. But these are nothing more than mob rule. There are no formal evaluation criteria, no straightforward appeals processes, and no sanctions against abuse. It works because the majority of instance admins are generally benevolent, and the few bad actors quite obvious. I’m old enough to remember when the whole Internet was like that. It was a long time ago.

The flaws with this approach are obvious enough that IFTAS was created. I have great respect for the work of IFTAS, I am glad that they exist, and sad that the Fediverse didn’t come through with the funding they require. But I also see them falling headfirst into the very obvious trap here. They recognised that not all instances want to ban every type of problematic content. So they started classifying: phishing, hate-speech, doxxing, a one-by-one list of the violations of civil discourse. All we have to do is agree among the decent servers that they will tag and broadcast their decisions to each other. And presto, the majority of the bad content can be blocked from your site before a single one of your users sees it.

But then the question arises: “who says?” Is it still #doxxing if the location of wanted criminals is revealed? What if the illegally-hidden identities of ICE agents are exposed? Vast amounts of valid journalism could be swept up in the more inclusive definitions of “doxxing”. Clearly, boundaries must be drawn and exceptions must be made. Not everyone will agree with those.

Composable Moderation

This is the motivation behind Bluesky’s widely and deservedly praised composable moderation system. It’s built into the architecture of AT that there are many independent Labellers. These ingest the firehose, make quick decisions about which labels apply, and broadcast those decisions back out into the firehose. It’s no coincidence that AT got there first: the AT architecture is optimised to support a single global instant messaging stream, with everyone seeing everything immediately. There’s no way human moderation on individual instances can keep pace with that. But even though AT needs it more, the same solution should apply to the Fediverse. Instead of one perfect organisation centralising the classification schema, an array of independent actors can respond to threats as they emerge.

So in practice, we want moderation triples that look like this:

Subject | Predicate | Object
https://x.com/furryantifa/p/1234 | label | https://iftas.org/doxxing
https://x.com/furryantifa/p/1234 | label | https://blacksky.org/ice_tracking

We are thus able to implement a more nuanced moderation policy. We don’t accept doxxing, and we generally do trust IFTAS’s judgement on that. But let’s say that hypothetically IFTAS made the unfortunate choice that exposing ICE agents counts as “doxxing”. We don’t want to block that important activity, but we don’t want to just allow all the vicious doxxing IFTAS blocks. Coming to our rescue is Blacksky. By explicitly labelling anti-ICE “doxxing” as ice_tracking, we can provide a category of exceptions that give us the best of both worlds.

But it gets worse! There’s an outbreak of people discussing people’s deadnames. This is considered a kind of doxxing. But (again hypothetically) the rigid IFTAS definition was created without that case in mind. So IFTAS refuses to tag those posts correctly. What to do? The solution now is provided by Northsky. They have created their own definition of doxxing that includes revealing deadnames. Only trouble is, they aren’t nearly as efficient as IFTAS about catching the general case. So really what you want now is something like really doxxing = ((iftas doxxing but not blacksky ICE tracking) or northsky doxxing). Phew!

Subject | Predicate | Object
https://x.com/furryantifa/p/1234 | no label | https://iftas.org/doxxing
https://x.com/furryantifa/p/1234 | label | https://blacksky.org/ice_tracking
https://x.com/furryantifa/p/1234 | label | https://northsky.org/doxxing

Importantly, now there are two organisations with only partially overlapping definitions of the same tag. And we can’t just choose sides. Our increasingly complex algorithm requires both. This is why we always tag with a full URL that includes a controlling organisation. It’s not enough to use any short name like doxxing. That would quickly lead to conflict as to who has the “true” definition. By using a URL, both can exist side-by-side. There will still be sniping about “splitters” and “rigid bureaucrats”, but everyone can get what they need.
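Once labels arrive as a set of full URLs, that increasingly complex algorithm stays a one-liner. A sketch using the hypothetical labeller URLs from the tables above:

```python
IFTAS_DOX    = "https://iftas.org/doxxing"
BLACKSKY_ICE = "https://blacksky.org/ice_tracking"
NORTHSKY_DOX = "https://northsky.org/doxxing"

def really_doxxing(labels: set) -> bool:
    """(IFTAS doxxing but not Blacksky ICE tracking) or Northsky doxxing."""
    return (IFTAS_DOX in labels and BLACKSKY_ICE not in labels) \
        or NORTHSKY_DOX in labels
```

Because the two definitions live at different URLs, both terms can appear in the same expression without anyone having to win the argument over the “true” meaning of doxxing.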

Predicates

That counts as an explanation of why objects, the different #doxxing definitions, must be URLs. What about predicates? So far the controversy has been concentrated in the objects and subjects. It’s less intuitive why predicates themselves might be controversial.

For this I’ll bring up the fantasy project I was dreaming of when I first started investigating RDF. I wanted to build an open database of electronics products, with their features and compatibility. This camcorder has these output ports pumping out data in these formats, this television has another set of ports and supports more data formats. Can they talk to each other? Make it comprehensive enough, it could solve puzzles about what is the cheapest product that can connect this gizmo to that widget. At the time, that was a problem I had.

Subject | Predicate | Object
https://amazon.com/g/B123456789 | has connector | HDMI
https://amazon.com/g/B123456789 | can output | 1024×768

Except, there’s a problem. It turns out that this device actually only has an output of 640×480. But it can output that as a 1024×768 image, crudely upscaled. It looks ugly as hell. Does it really output 1024×768? Cue the flame war.

The same dynamic applies. You disagree about the meaning of the term? Fork it! Perhaps Amazon insists that technically the output is the full resolution. But an alternative site has a more pessimistic evaluation.

Subject | Predicate | Object
https://amazon.com/g/B123456789 | https://amazon.com/schema#can_output | 1024×768
https://amazon.com/g/B123456789 | https://ifixit.com/schema#can_output | 640×480

In the time I spent thinking about this problem, I realised that there was no possible way a small group of people could once and for all define the full schemas necessary to characterise the vast spectrum of consumer electronics. I’m confident that individual enthusiasts could easily outperform the quality of the listings on typical retail sites, not to mention correct the outright lies. But the fine details of the schema would have to be delegated to sub-sub-sub-domain experts. And those experts would inevitably get into conflicts that cannot possibly be amicably resolved.

RDF is the neatest solution to this problem. It allows competing factions to agree to disagree on the points that matter to them, while still being able to cooperate in those areas with broad consensus. The data flows.

Expert Systems

Remember when “AI” meant expert systems? I don’t. It was before my time. It was a technique suited to a period with vastly fewer computing resources. Which means that unlike LLMs, it’s efficient. To work well, it needs large, well-specified data sets. My electronics database idea would be the perfect application of expert systems. Deterministic, reliable, controllable. The way computers were always supposed to be.

To reason its way through the mess of different opinions, we would need to add more opinions. Something like, the true max resolution is the minimum of that reported by Amazon and iFixit. Or we might say that for our purposes, IFTAS’s doxxing is the same as Northsky’s doxxing.

Subject | Predicate | Object
https://example.test/schema#max_resolution | has upper bound | https://amazon.com/schema#can_output
https://example.test/schema#max_resolution | has upper bound | https://ifixit.com/schema#can_output
https://northsky.org/doxxing | is equivalent to | https://iftas.org/doxxing

Does this start to look a little bit like we’re writing code in RDF to manipulate data in RDF? Why yes, yes it does. Does this hint at the rabbit hole I dove head-first into and got stuck in for 6 months, killing the project dead in the water? Why… yes.

That doesn’t make it a bad idea. It just serves as a warning of a road you should not venture too far down.
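For comparison, the resolution rule is trivial when written as plain code rather than as triples. A sketch, reusing the invented schema URLs from above:

```python
# Pessimistic merge: trust the smallest resolution anyone reports.
claims = {
    "https://amazon.com/schema#can_output": (1024, 768),
    "https://ifixit.com/schema#can_output": (640, 480),
}

def max_resolution(claims: dict) -> tuple:
    """The true ceiling is the lowest claim, measured by pixel count."""
    return min(claims.values(), key=lambda wh: wh[0] * wh[1])
```

The rabbit hole opens when you try to express rules like this *in* RDF itself, rather than in a few lines of ordinary code beside it.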

Humans

What I’m saying with all of the above is, if you want to understand JSON-LD, you should try to think in terms of triples, not structures.

JSON-LD itself is a rather weird beast. It’s kinda like a compression format where instead of trying to reduce the number of bytes of storage, it’s trying to reduce the cognitive load of importing that data into the brain of a human using only busybox-level command line tools. Which… seems like a pretty weird project when I type it out like that.

The simplest way to write JSON-LD is not to use @context at all. This is perfectly valid ActivityPub:

{
  "@id": "https://friendica.misfits.fedi/objects/5c869366-4864-fe02-51e9-2cc050471877",
  "@type": "https://www.w3.org/ns/activitystreams#Like",
  "https://www.w3.org/ns/activitystreams#actor": "https://friendica.misfits.fedi/profile/pizzazz",
  "https://www.w3.org/ns/activitystreams#published": "2023-09-10T17:52:17Z",
  "https://www.w3.org/ns/activitystreams#to": "https://www.w3.org/ns/activitystreams#Public",
  "https://www.w3.org/ns/activitystreams#object": "https://friendica.stingers.fedi/objects/4c279aa5-1364-fe01-95b0-398227271002"
}

But that’s very hard to read, with its long lines. That’s what @context is there to fix. It’s a way to strip out common strings while preserving the full flexibility of the original:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://friendica.misfits.fedi/objects/5c869366-4864-fe02-51e9-2cc050471877",
  "type": "Like",
  "actor": "https://friendica.misfits.fedi/profile/pizzazz",
  "published": "2023-09-10T17:52:17Z",
  "to": "Public",
  "object": "https://friendica.stingers.fedi/objects/4c279aa5-1364-fe01-95b0-398227271002"
}

But then you can go further. Wouldn’t it be nice if you could avoid fetching the liked object and just get it in the original notification?

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://friendica.misfits.fedi/objects/5c869366-4864-fe02-51e9-2cc050471877",
  "type": "Like",
  "actor": "https://friendica.misfits.fedi/profile/pizzazz",
  "published": "2023-09-10T17:52:17Z",
  "to": "Public",
  "object": {
    "id": "https://friendica.stingers.fedi/objects/4c279aa5-1364-fe01-95b0-398227271002",
    "type": "Note",
    "published": "2023-09-10T14:49:27Z",
    "to": "Public",
    "content": "<p>LOL 😜</p>"
  }
}

That same data would be quite a long and unreadable set of triples. But that’s a human problem. For a computer, it would actually be simpler to process with everything spelled out. It would be more raw bytes of course. But if you wanted to minimise that, a normal general-purpose compression algorithm would provide far better results than JSON-LD. The only reason the messages we actually see look the way they do, is so that humans can pipe the output to jq and get something digestible.
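To make the point concrete, here is a toy expander that treats @context as nothing but a prefix table. This is nowhere near a conformant JSON-LD processor (real contexts also handle aliasing, type coercion, and nesting); it only shows the string-substitution core:

```python
AS = "https://www.w3.org/ns/activitystreams#"

def expand(compact: dict) -> dict:
    """Rewrite short keys to full IRIs; leave keywords and IRIs alone."""
    return {
        (k if k.startswith("@") or "://" in k else AS + k): v
        for k, v in compact.items()
    }

like = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://friendica.misfits.fedi/profile/pizzazz",
}
expanded = expand(like)
```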

The downside of this effort is that it makes these messages very much resemble the non-linked-data ordinary JSON used in REST APIs all over the Internet. There is no imperative to think about this data as triples. And so most developers instinctively see structures.

Conversion

JSON-LD is not difficult to interpret because of any intrinsic property of the data. The difficulty arises from the implicit purpose of interpreting this data. That purpose is importing data from one system into another, when those systems have entirely incompatible data schemas.

The typical way of handling such export-import processes is to convert a proprietary custom format into a standard export format, and then convert that standard export format back into a different proprietary custom format.

Superficially, that’s what ActivityPub appears to be. But it’s not, because in reality RDF doesn’t provide that kind of standardisation. It’s just a container format allowing you to export your proprietary custom format intact. However, ActivityPub (or rather ActivityStreams) provides a vocabulary you can use to convert a portion of your data into some externally-defined fields. You end up with a data blob that is only partially standardised. The receiver then has the job of interpreting what it can. Of course this is going to be difficult.

Most programmers are conditioned to expect the fully-standardised export format approach, and get in trouble when it turns out that ActivityPub is not that. And the cry goes out, why not just actually do that? The answer is that in the federated social media world, that standardisation effort will fail. If any agreement is reached, it will inevitably be too rigid to map onto the needs of the communities that try to use it.

Solutions

So how should an ActivityPub programmer implement JSON-LD?

First of all, programmers should train themselves to accept imperfection. The simplest thing to do is to treat ActivityPub as JSON, with a fixed set of fields, and require that @context is exactly what you expect. Make it work fine when talking to Mastodon, and make it robustly reject data that doesn’t fit that. Yes, I know this goes against every principle for which you stand, and the whole point of ActivityPub in the first place. That’s why this is the first lesson. You will never progress any further unless you are capable of comfortably accepting the compromises in this first step. Optionally, you can add a handful of critical alternative pathways for talking to important systems that don’t conform exactly to Mastodon’s approach. In any case, you should leave a space in your code where those rules could be inserted.
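A sketch of that first lesson: a deliberately rigid parser that accepts one @context shape and rejects everything else cleanly, with a marked spot for the alternative pathways (the field set and error type are invented for illustration):

```python
EXPECTED = "https://www.w3.org/ns/activitystreams"

class Unsupported(Exception):
    """Raised for any document we choose not to understand."""

def parse_activity(doc: dict) -> dict:
    ctx = doc.get("@context")
    if ctx != EXPECTED and not (isinstance(ctx, list) and EXPECTED in ctx):
        # Space reserved here: special-case important non-Mastodon peers.
        raise Unsupported("unfamiliar @context")
    try:
        return {"id": doc["id"], "type": doc["type"], "actor": doc["actor"]}
    except KeyError as missing:
        raise Unsupported(f"missing field {missing}") from None
```

Rejected messages are dropped, not crashed on. That is the whole compromise: imperfect coverage, robust behaviour.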

The next step up is to use a library that is able to process JSON-LD into fixed structures that reflect the ActivityPub spec. Probably this library will handle other tasks such as signature verification as well. But this layer allows you to, in principle, communicate with any implementation conformant to every obscure spec detail. This gets you close to the standardised export format approach that programmers expect.

But this is not the end of the story. Such a library can only get you part of the way to interpreting the non-ActivityStreams data that flows over ActivityPub, such as the moderation decisions discussed above.

Architecture

To move beyond the basics of ActivityPub, you need to rethink the role of your software.

A typical website provides a data store with a fixed schema, a web front-end for visibility and reach, and an app talking via an API to provide an efficient and reactive experience. That’s several things. And it’s far from clear that particular uses (video site, dating site, gaming site) should all adopt this structure.

To me, the more obvious approach is to make an AP server that is a dumb data repository. Such an implementation would either store JSON-LD directly, or more radically the triples themselves in a triple store. It would thus be completely agnostic regarding data schema. No matter how obscure the custom data format, it would simply store it as-is, with a minimum of processing. That would leave all the complexity of interpreting data for custom purposes to the client. Most likely, one user would have several different custom apps talking to the same server, each tailored to a particular use-case.

In this world, a great deal of the difficulty of interpreting JSON-LD just vanishes. A single account might receive and interpret messages with thousands of custom fields, and yet no single piece of software needs to understand all of them. Clients can and would invest effort interpreting the namespaces relevant to them, and ignore everything else.
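A sketch of such a dumb repository, agnostic about what any predicate means (URLs invented):

```python
from collections import defaultdict

class TripleStore:
    """Store whatever arrives, as-is. Interpretation is the client's job."""
    def __init__(self):
        self._by_subject = defaultdict(list)

    def add(self, s: str, p: str, o: str) -> None:
        self._by_subject[s].append((p, o))

    def about(self, s: str) -> list:
        return list(self._by_subject[s])

store = TripleStore()
store.add("https://example.test/note/1",
          "https://hypothetical.app/schema#mood", "grumpy")
# The server never needed to know what "mood" means; some client will.
```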

I think this is a good idea. But I do not believe this is necessary. And to reiterate, when implementations ignore this and just impose their own fixed schema on the whole of the Fediverse, that’s fine. There is no cause to sneer at simple implementations.

Payoff

Why should we stick to JSON-LD? What does it give us that other approaches cannot?

Ultimately the benefit of JSON-LD is that it saves time during the standardisation process. It allows us to disagree about absolutely everything, up to and including the nature of reality itself. By all accounts, the birth of ActivityPub was not easy at all, despite the absence of established commercial interests and the presence of working prototypes. It’s hard to imagine the implementation of shared moderation being any easier to standardise if we all have to agree on a single definition of “anti-semitism”.

JSON-LD attaches a URL, and therefore a controlling authority, to every ontological atom in our lexicon. It is not realistic that we can agree on anything, but we can at least explicitly attach opinions to participants in the argument. This is built into the existing standard.

It took me a long time to internalise the triple-based world view exemplified by RDF. But I believe that effort was worth it – for me, and for anyone who might be reading this blog article. I was very excited when I realised that RDF was effectively the underpinning of ActivityPub. I would be very disappointed to see that potential abandoned.
