I've started my exploration of using @timbray's Quamina project for saving some compute time in the filters module of #GoActivityPub
-
Currently the GoAP storage backends iterate over resources (usually stored as raw JSON bytes), unmarshal them into GoActivityPub object structs, and *only* then apply the custom filtering logic to those objects. Since most of the objects fail the filtering, all that JSON decoding is wasted compute and makes things slower.
Ideally, Quamina will let me check the raw JSON payloads directly against the filters, streamlining execution and speeding things up.
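To make the idea concrete, here is a minimal sketch of the two approaches. The `Object` struct and the "match Notes only" filter are hypothetical stand-ins for the real GoActivityPub types, and `bytes.Contains` is a deliberately crude stand-in for Quamina's actual pattern matching (per its README, the real API is roughly `q.AddPattern(...)` / `q.MatchesForEvent(rawJSON)`); the point is only that the raw-bytes path never builds a struct:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Object is a hypothetical stand-in for a GoActivityPub object struct.
type Object struct {
	Type string `json:"type"`
	ID   string `json:"id"`
}

// filterDecoded is the current approach: unmarshal first, filter after.
// The decode cost is paid even for objects the filter rejects.
func filterDecoded(raw []byte) bool {
	var o Object
	if err := json.Unmarshal(raw, &o); err != nil {
		return false
	}
	return o.Type == "Note"
}

// filterRaw sketches the Quamina-style idea: reject non-matching payloads
// by inspecting the raw bytes, without ever building a struct. Quamina does
// real structural pattern matching; bytes.Contains is just a toy stand-in.
func filterRaw(raw []byte) bool {
	return bytes.Contains(raw, []byte(`"type":"Note"`))
}

func main() {
	events := [][]byte{
		[]byte(`{"type":"Note","id":"https://example.com/1"}`),
		[]byte(`{"type":"Like","id":"https://example.com/2"}`),
	}
	for _, ev := range events {
		fmt.Println(filterDecoded(ev), filterRaw(ev))
	}
}
```

Both filters agree on these two events; the difference is that the second one never pays for `json.Unmarshal` on the Like that gets dropped anyway.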
-
Guest crossposted this topic to General Discussion
-
Sadly, adding Quamina didn't bring any measurable improvement to the integration test suite I use for my federated server, probably because the amount of data it handles is way too low and the overhead of running the application and test suite is way too high.
It looks like I need to build some artificial benchmarks that exercise just the storage fetches.
-
Well, benchmarking doesn't help either; the measurements are so noisy that I can't draw any inferences from them.
I'm not sure how to isolate the tests any further.
Perhaps the issue is that all the tests rely on the actual filesystem.
Maybe I need to find a memory-backed filesystem mock...
-
I realized I'm already running on a memory-backed filesystem: the default testing.T.TempDir() returns a path under /tmp, which is tmpfs on the machine where I'm testing.
Gaaah!!!
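For anyone wanting to check the same thing on their own machine, one way to see which filesystem backs a path (on Linux, assuming GNU coreutils):

```shell
# Print the filesystem type backing /tmp; "tmpfs" means memory-backed.
stat -f -c %T /tmp

# Or, with the type shown in the "Type" column:
df -T /tmp
```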
