@zvava

ok so i have found a genuine twt hash collision. what do i do.

internally, bbycll relies on a post lookup table with post hashes as keys. this is really fast, but i knew i'd inevitably run into this issue (just not so soon), so now i have to either:
  1) pick the newer post over the other
  2) break from specification and not lowercase hashes
  3) secretly associate canonical urls or additional entropy with post hashes in the backend without a sizeable performance impact somehow
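e.g. option 3 could look something like this — a rough go sketch, assuming each post already carries its canonical feed url anyway, so collisions just fall into a tiny bucket and lookups stay effectively O(1):

```go
package main

import "fmt"

// Post carries enough extra entropy (the canonical feed URL)
// to tell two colliding twt hashes apart.
type Post struct {
	FeedURL string
	Content string
}

// Index buckets posts by hash; colliding posts share a bucket.
type Index map[string][]Post

func (ix Index) Put(hash string, p Post) {
	ix[hash] = append(ix[hash], p)
}

// Get resolves a hash, falling back to the feed URL only when
// two posts share a hash. Almost every bucket has length 1, so
// the fast path is unchanged.
func (ix Index) Get(hash, feedURL string) (Post, bool) {
	bucket := ix[hash]
	if len(bucket) == 1 {
		return bucket[0], true
	}
	for _, p := range bucket {
		if p.FeedURL == feedURL {
			return p, true
		}
	}
	return Post{}, false
}

func main() {
	ix := Index{}
	ix.Put("6ishh6q", Post{FeedURL: "https://a.example/twtxt.txt", Content: "first"})
	ix.Put("6ishh6q", Post{FeedURL: "https://b.example/twtxt.txt", Content: "second"})
	p, ok := ix.Get("6ishh6q", "https://b.example/twtxt.txt")
	fmt.Println(ok, p.Content) // true second
}
```

the catch is that anything resolving a bare hash (e.g. a reply marker) still needs a feed url hint when a bucket has more than one entry.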

≈2 months ago | #6ishh6q
In reply to #6ishh6q by @zvava
@prologic | #6ishh6q

@zvava we have to amend the spec and increase the hash length. We just haven't done so yet 😆

≈2 months ago | #vnph4ma
@zvava | #6ishh6q

@prologic i just added timeline refresh to bbycll and it is so convincing i almost replied to you from there hehe, can i get a link pretty please :o

≈2 months ago | #bxfzeaa
In reply to #tm3naga by @lyse
In reply to #zqxcq3a by @prologic
@zvava | #zqxcq3a

@prologic im unsure how i feel about the hash v2 proposal; given it is completely backward incompatible with hash v1, it doesn't really solve any of the problems with it. it only delays collisions, and still fragments threads on post edits

i skimmed through the discussions under the other proposals — i agree humans are very bad at keeping the integrity of the web intact, but hashes done this way make it impossible even for systems to rebuild threads if any post edits occurred prior to their deployment

≈2 months ago | #nzs23fa
In reply to #nzs23fa by @zvava
@lyse | #nzs23fa

@zvava It is just completely impossible to make v2 backwards-compatible with v1.

Well, breaking threads on edits is considered a feature by some people. I reckon the only way to reasonably deal with that property is to carefully review messages before publishing them, thus delaying feed updates. Any typos etc. that are discovered afterwards are just left alone. That's what I and some others do. I only risk editing if the feed was published a very few seconds earlier. More than 20 seconds and I just ignore it. Works alright for the most part.

≈2 months ago | #axrtzga
@zvava | #nzs23fa

@lyse i dont mind if the hash is not backward compatible, but im not sure this is the right way to proceed, because the added complexity of dealing with two hash versions isnt justified

regular end users wont care to understand how twt hashes are formed, they just want to use twtxt! so i guess i could protect users from themselves by disallowing edits on old posts or posts with replies, but i'm not really fond of this either. if they want to break a thread, they can just delete the post (though i've noticed yarn handling post deletes dubiously...)

on activitypub i do genuinely find myself looking through several-month or even year-old posts sometimes and deciding to edit/reword them a little to be slightly less confusing; this should be trivial to handle on twtxt, which is an infinitely simpler specification

≈2 months ago | #dvw775q
In reply to #dvw775q by @zvava
@lyse | #dvw775q

@zvava There would be only one hash per message. Some to-be-defined magic date selects which hash to use. If the message creation timestamp is before this epoch, hash it with v1, otherwise hammer it through v2. Eventually, support for v1 could be dropped once nobody interacts with the old stuff anymore. But I'd keep it around in my client, because why not.

If users choose a client which supports the extensions, they don't have to mess around with v1 and v2 hashing, just like today.

As for the school of thought, personally, I'd prefer something else, too. I'm in camp location-based addressing, or whatever it is called. The more I think about it, the more a complete redesign of twtxt and its extensions seems necessary in my opinion. Retrofitting has its limits. Of course, this is much more work, though.
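For illustration only, a location-based reference could be as simple as feed URL plus timestamp (a purely hypothetical shape, not any agreed spec). The trade-off flips: editing the text no longer changes the reference, so threads survive edits, but moving the feed URL becomes the new way to break them.

```go
package main

import "fmt"

// Ref is a hypothetical location-based post reference: the feed
// URL plus the post's RFC 3339 timestamp, instead of a
// content-derived hash. Content edits leave it stable; moving
// the feed breaks it.
type Ref struct {
	FeedURL   string
	Timestamp string
}

func (r Ref) String() string {
	return r.FeedURL + "#" + r.Timestamp
}

func main() {
	r := Ref{"https://example.org/twtxt.txt", "2025-08-01T12:00:00Z"}
	fmt.Println(r) // https://example.org/twtxt.txt#2025-08-01T12:00:00Z
}
```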

≈2 months ago | #tu6eela