"how to not regret c2s"w.on-t.work/activitypub/c2smy opinions about how activitypub c2s ought to be implemented, probably with way too much snark for anyone to take it seriously.
-
i guess you could make a frontend that talks both APIs natively and picks which one to use depending on how you log in, which would avoid the going evil bit, but that increases complexity on the user's device a fair bit (likely manageable though) and still requires you to proxy the content of the other network, as trying to fetch it from the client will either rate-limit/be slow or cause auth failures (e.g. an atproto login trying to fetch AP objects)
-
now that i think about it, you would need actors and logins for both sides anyway if you're doing bridging, this is prolly not an issue
-
bad ideas:
xrpc/com.w3.activitystreams.proxyUrl?id=https://..
{ "endpoints": { "xrpcProxyUrl": "https://..." } }
an XRPC endpoint returning AS2-compatible data would be funny
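(for illustration, a sketch of what that joke endpoint could serve — the method name comes from the post above, everything else here is made up:)

```python
# Toy handler for the hypothetical com.w3.activitystreams.proxyUrl XRPC
# method: a real one would fetch `id` with ActivityPub content-negotiation
# headers and relay the response; this one just fabricates a minimal AS2 Note.
def proxy_url(id: str) -> dict:
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": id,
        "type": "Note",
        "content": "an AS2 object, served over XRPC for some reason",
    }

resp = proxy_url("https://example.com/notes/1")
```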
-
@jb i should look this up: do lexicons have any places they can have unrestricted json in, or would it have to be a string or something?
lexicons do have unrestricted json; lexicon validation is there but it's not required
-
@jb no i think validation is important
there is an unknown type but
> The (nested) contents of the data object must still be valid under the atproto data model. For example, it should not contain floats. Nested compound types like blobs and CID links should be validated and transformed as expected.
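(the "no floats" rule quoted above can be checked with a few lines — a simplified sketch, real atproto validation also covers bytes, CID links, blobs, string limits, etc.:)

```python
def valid_atproto_data(value) -> bool:
    """Recursively check decoded JSON against the atproto data model's
    'no floats' rule. Simplified sketch, not a full validator."""
    if value is None or isinstance(value, (bool, int, str)):
        return True
    if isinstance(value, float):
        return False  # floats are not allowed in the atproto data model
    if isinstance(value, list):
        return all(valid_atproto_data(v) for v in value)
    if isinstance(value, dict):
        return all(isinstance(k, str) and valid_atproto_data(v)
                   for k, v in value.items())
    return False
```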
-
"how to not regret c2s"
w.on-t.work/activitypub/c2s
my opinions about how activitypub c2s ought to be implemented, probably with way too much snark for anyone to take it seriously. wrote pretty much all of it at like 1 am so expect the writing to not be great. will prolly regret it tomorrow but eh. whatever
#activityPub #fediDevs
-
@kopper nah it's good shit. i've been meaning to compile all my thoughts on the subject but you basically did it for me with this piece

-
"how to not regret c2s"
w.on-t.work/activitypub/c2s
my opinions about how activitypub c2s ought to be implemented, probably with way too much snark for anyone to take it seriously. wrote pretty much all of it at like 1 am so expect the writing to not be great. will prolly regret it tomorrow but eh. whatever
#activityPub #fediDevs> separate data hosting from data interpretation
https://activitypub.mushroomlabs.com/topics/reference_context_architecture/
> with many objects and slow connections the overhead of creating new http requests would add up significantly
GraphQL and/or SPARQL
> iterating through my inbox every time i open my “home timeline”
The same way your email client works: sync the inbox, create index views in the client. That's at least what I am doing for my C2S client. My problem now is that I wish that C2S had a way to say "clear my inbox"
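(the sync-then-index approach, sketched very roughly — data and field names are made up, this is not a real AP client:)

```python
# Pull the inbox into a local store once, then derive views (like a
# home timeline) on the client instead of re-walking the server-side
# collection on every open.
inbox_page = [
    {"type": "Create", "object": {"type": "Note", "content": "hi"}},
    {"type": "Like", "object": "https://example.com/notes/1"},
    {"type": "Create", "object": {"type": "Note", "content": "again"}},
]

local_store = list(inbox_page)  # the synced copy, kept client-side

# an "index view" built locally: just the posts
home_timeline = [a["object"] for a in local_store if a["type"] == "Create"]
```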
-
@raphael
> GraphQL and/or SPARQL
these can be valid options but they add complexity to the C2S server, which is both the part that individual users would want to self-host AND also the part already burdened by having to deal with the traffic bursts caused by things like boosts and replies by large accounts. i think if a query endpoint were to be created it should be maintained by a client (at least architecturally, if an implementation wants to merge them both it's their choice!)
> The same your email client works: sync the inbox, create index view in the clients.
this still does not address the problem. an actor's inbox contains way more than just the Posts on their Timelines (and will contain even more if AP were to become the Everything Protocol it dreams of being). you'd need to load pages upon pages of as:Like, litepub:EmojiReact (and as:Undo for both), as:Listen (e.g. following someone using pleroma scrobbles), various as:Add/as:Remove/as:Update/as:Delete "management" activities for unknown objects, and so on, all of which will swiftly get thrown away but still end up consuming latency (especially after a few days of being away) and often-costly mobile bandwidth
email clients work because your inbox Only contains Emails. the data you get is pretty much all relevant, you don't end up discarding huge chunks of it
-
@raphael i also have personal opinions around graphql/sparql such as the ability for a client to create slow and resource-consuming queries and concerns around how a shared C2S server is supposed to rate limit those, but given graphql's popularity these already have some discussion and acceptable solutions, though we really don't need facebook-scale tooling for this
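(one of the standard mitigations alluded to above is a query complexity/depth budget, checked before execution — sketched here on a toy query tree rather than real GraphQL:)

```python
# Reject client-supplied queries past a depth budget before running
# them, so a shared C2S server can bound per-request work.
def query_depth(node: dict) -> int:
    children = node.get("selections", [])
    if not children:
        return 1
    return 1 + max(query_depth(c) for c in children)

MAX_DEPTH = 4
query = {"selections": [{"selections": [{"selections": []}]}]}
depth = query_depth(query)
accepted = depth <= MAX_DEPTH
```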
-
This is what I am planning to do to deal with huge inboxes: https://codeberg.org/mushroomlabs/django-activitypub-toolkit/issues/31
As for SPARQL/GraphQL: yes, if I am syncing all the data (that I want) to my local database, I'd implement the query engine *in the local client*.
And I am not even thinking about discarding anything. JSON can compress nicely, so I'd keep an actual database for the indexing and JSON-LD documents as a local archive.
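(that storage split, sketched with sqlite + zlib — the schema and document are made up for illustration:)

```python
import json
import sqlite3
import zlib

# A real database for the indexing, plus the raw JSON-LD documents
# kept zlib-compressed as a local archive.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (id TEXT PRIMARY KEY, published TEXT, blob BLOB)")

doc = {"id": "https://example.com/notes/1", "type": "Note",
       "published": "2024-05-01T00:00:00Z", "content": "hello"}
db.execute("INSERT INTO docs VALUES (?, ?, ?)",
           (doc["id"], doc["published"], zlib.compress(json.dumps(doc).encode())))

# indexed lookup, then decompress the archived document on demand
row = db.execute("SELECT blob FROM docs WHERE id = ?", (doc["id"],)).fetchone()
restored = json.loads(zlib.decompress(row[0]))
```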