Age | Commit message | Author |
|
* Add support for followers synchronization on the receiving end
Check the `collectionSynchronization` attribute on `Create` and `Announce`
activities and synchronize followers from the provided collection if possible.
* Add tests for followers synchronization on the receiving end
* Add support for follower synchronization on the sender's end
* Add tests for the sending end
* Switch from AS attributes to HTTP header
Replace the custom `collectionSynchronization` ActivityStreams attribute with
an HTTP header (`X-AS-Collection-Synchronization`) using the same syntax as
the `Signature` header and the following fields (sketched at the end of this entry):
- `collectionId` to specify which collection to synchronize
- `digest` for the SHA256 hex-digest of the list of followers known on the
receiving instance (where “receiving instance” is determined by accounts
sharing the same host name for their ActivityPub actor `id`)
- `url` of a collection that should be fetched by the instance actor
Internally, move away from the webfinger-based `domain` attribute and use
account `uri` prefix to group accounts.
* Add environment variable to disable followers synchronization
Since the whole mechanism relies on some new preconditions that, in some
extremely rare cases, might not be met, add an environment variable
(DISABLE_FOLLOWERS_SYNCHRONIZATION) to disable the mechanism altogether and
avoid followers being incorrectly removed.
The current conditions are:
1. all managed accounts' actor `id` and inbox URL have the same URI scheme and
netloc.
2. all accounts whose actor `id` or inbox URL share the same URI scheme and
netloc as a managed account must be managed by the same Mastodon instance
as well.
As far as Mastodon is concerned, breaking those preconditions requires extensive
configuration changes in the reverse proxy and might also cause other issues.
Therefore, this environment variable provides a way out for people with highly
unusual configurations, and can be safely ignored for the overwhelming majority
of Mastodon administrators.
* Only set follower synchronization header on non-public statuses
This is to avoid unnecessary computations and allow Follow-related
activities to be handled by the usual codepath instead of going through
the synchronization mechanism (otherwise, any Follow/Undo/Accept activity
would trigger the synchronization mechanism even if processing the activity
itself would be enough to re-introduce synchronization)
* Change how ActivityPub::SynchronizeFollowersService handles follow requests
If the remote lists a local follower which we only know has sent a follow
request, consider the follow request as accepted instead of sending an Undo.
* Integrate review feedback
- rename X-AS-Collection-Synchronization to Collection-Synchronization
- various minor refactoring and code style changes
* Only select required fields when computing followers_hash
* Use actor URI rather than webfinger domain in synchronization endpoint
* Change hash computation to be an XOR of individual hashes
Makes it much easier to be memory-efficient, and avoids sorting discrepancy issues.
* Marginally improve followers_hash computation speed
* Further improve hash computation performance by using pluck_each
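For illustration, here is a minimal, self-contained sketch of the digest and header described in this entry: each follower URI is hashed individually and the per-byte results are XORed together, so the digest is order-independent and cheap on memory. Helper names and URIs are made up for the example; this is not the exact code from the change.
```
require 'digest'

# Combine per-URI SHA256 digests with XOR: the result does not depend on
# ordering and can be computed without holding the whole list in memory.
def followers_hash(follower_uris)
  digest = [0] * 32

  follower_uris.each do |uri|
    Digest::SHA256.digest(uri).bytes.each_with_index do |byte, i|
      digest[i] ^= byte
    end
  end

  digest.pack('C*').unpack1('H*')
end

# Hypothetical assembly of the header described above, using the
# Signature-like `key="value"` syntax (URIs are placeholders).
def collection_synchronization_header(collection_id, partial_collection_url, follower_uris)
  [
    %(collectionId="#{collection_id}"),
    %(digest="#{followers_hash(follower_uris)}"),
    %(url="#{partial_collection_url}"),
  ].join(', ')
end

puts collection_synchronization_header(
  'https://social.example/users/alice/followers',
  'https://social.example/users/alice/followers_synchronization',
  ['https://remote.example/users/bob', 'https://remote.example/users/carol']
)
```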
|
|
* Changed the number of retries and rescued exceptions in ActivityPub::ProcessingWorker
* Remove RecordNotUnique from rescue
|
|
* Change move handler to carry blocks and mutes over
When user A blocks user B and B moves to a new account C, make A block C
accordingly.
Note that it only works if A's instance is aware of the Move, that is,
if B is on A's instance or has followers there.
* Also notify instances with known people blocking you when moving
* Add automatic account notes when blocking/muting an account that had no note
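A tiny sketch of the block-carrying step from the first point, using a plain stand-in for the block relationship; the real handler works on ActiveRecord models and also covers mutes.
```
# Minimal stand-in for the block relationship: who blocks whom.
Block = Struct.new(:account_id, :target_account_id)

# When old_account moves to new_account, every account known to block the
# old account should end up blocking the new one as well.
def carry_blocks_over(blocks, old_account_id, new_account_id)
  carried = blocks
    .select { |block| block.target_account_id == old_account_id }
    .map    { |block| Block.new(block.account_id, new_account_id) }

  (blocks + carried).uniq
end

blocks = [Block.new(1, 42)]          # account 1 blocks account 42
p carry_blocks_over(blocks, 42, 99)  # after 42 moves to 99, account 1 also blocks 99
```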
|
|
|
|
Fix #13086, close #13113
|
|
(#13482)
|
|
|
|
Also:
- Fix locks not being removed when jobs go to the dead job queue
- Add UI for managing locks to the Sidekiq dashboard
- Remove unused Sidekiq workers
Fix #13349
|
|
This is to suppress irrelevant backtraces from errors raised when
delivering toots to remote servers. The errors are usually outside the
local server's control and the backtraces don't provide much
information.
This is similar to https://github.com/tootsuite/mastodon/pull/5174
and shortens backtraces like below:
```
WARN: Mastodon::UnexpectedResponseError: https://example.com/inbox returned code 523
WARN: app/workers/activitypub/delivery_worker.rb:48:in `block (3 levels) in perform_request'
app/lib/request.rb:75:in `perform'
app/workers/activitypub/delivery_worker.rb:47:in `block (2 levels) in perform_request'
app/lib/request_pool.rb:53:in `use'
app/lib/request_pool.rb:108:in `block (2 levels) in with'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/notifications.rb:170:in `instrument'
app/lib/request_pool.rb:107:in `block in with'
app/lib/connection_pool/shared_connection_pool.rb:21:in `block (2 levels) in with'
app/lib/connection_pool/shared_connection_pool.rb:20:in `handle_interrupt'
app/lib/connection_pool/shared_connection_pool.rb:20:in `block in with'
app/lib/connection_pool/shared_connection_pool.rb:16:in `handle_interrupt'
app/lib/connection_pool/shared_connection_pool.rb:16:in `with'
app/lib/request_pool.rb:106:in `with'
app/workers/activitypub/delivery_worker.rb:46:in `block in perform_request'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:51:in `run_code'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:42:in `run_yellow'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:24:in `run'
app/workers/activitypub/delivery_worker.rb:57:in `perform_request'
app/workers/activitypub/delivery_worker.rb:25:in `perform'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:31:in `block in call'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/statsd/publisher.rb:27:in `statsd_time'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:30:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
app/lib/sidekiq_error_handler.rb:5:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/scout_apm-2.3.0.pre3/lib/scout_apm/background_job_integrations/sidekiq.rb:69:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-unique-jobs-6.0.18/lib/sidekiq_unique_jobs/server/middleware.rb:29:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:43:in `block in call'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:73:in `block in wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:72:in `wrap'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:42:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'
```
```
WARN: Stoplight::Error::RedLight: https://example.com/inbox
WARN: vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:46:in `run_red'
vendor/bundle/ruby/2.7.0/gems/stoplight-2.2.0/lib/stoplight/light/runnable.rb:25:in `run'
app/workers/activitypub/delivery_worker.rb:57:in `perform_request'
app/workers/activitypub/delivery_worker.rb:25:in `perform'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:31:in `block in call'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/statsd/publisher.rb:27:in `statsd_time'
vendor/bundle/ruby/2.7.0/gems/nsa-0.2.7/lib/nsa/collectors/sidekiq.rb:30:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
app/lib/sidekiq_error_handler.rb:5:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/scout_apm-2.3.0.pre3/lib/scout_apm/background_job_integrations/sidekiq.rb:69:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-unique-jobs-6.0.18/lib/sidekiq_unique_jobs/server/middleware.rb:29:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:43:in `block in call'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:73:in `block in wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/execution_wrapper.rb:87:in `wrap'
vendor/bundle/ruby/2.7.0/gems/activesupport-5.2.4.1/lib/active_support/reloader.rb:72:in `wrap'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/rails.rb:42:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog'
vendor/bundle/ruby/2.7.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'
```
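One way to get shorter traces like the ones above is to keep only the application's own frames when logging delivery errors; this is just an illustration of the idea, not necessarily the mechanism used here.
```
# Keep only frames from the application itself when reporting an error.
def app_backtrace(error, app_prefix: 'app/')
  (error.backtrace || []).select { |line| line.start_with?(app_prefix) }
end

begin
  raise 'https://example.com/inbox returned code 523'
rescue RuntimeError => error
  warn "WARN: #{error.class}: #{error.message}"
  app_backtrace(error).each { |line| warn "WARN: #{line}" }
end
```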
|
|
|
|
Fix #10736
- Change data export to be available for non-functional accounts
- Change non-functional accounts to include redirecting accounts
|
|
|
|
|
|
|
|
* Add request pool to improve delivery performance
Fix #7909
* Ensure connection is closed when exception interrupts execution
* Remove Timeout#timeout from socket connection
* Fix infinite retrial loop on HTTP::ConnectionError
* Close sockets on failure, reduce idle time to 90 seconds
* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server
* Use a shared pool size, 512 by default, to stay below open file limit
* Add some tests
* Add more tests
* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds
* Use a shared pool that returns preferred connection but re-purposes other ones when needed
* Fix wrong connection being returned on subsequent calls within the same thread
* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
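The gist of the pool is sketched below: one persistent connection per destination host, checked out with a block. The real implementation adds the shared size limit (512 by default), idle reaping, and per-thread preferences described above; class and method names here are illustrative.
```
require 'net/http'
require 'uri'

# One persistent connection per destination host, reused across deliveries.
class TinyRequestPool
  def initialize
    @connections = {}
    @mutex = Mutex.new
  end

  def with(uri)
    http = @mutex.synchronize do
      @connections[uri.host] ||= Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https')
    end
    yield http
  end
end

# Usage (performs a real request; shown for illustration only):
#   pool = TinyRequestPool.new
#   uri  = URI('https://example.com/inbox')
#   pool.with(uri) { |http| http.post(uri.path, '{}', 'Content-Type' => 'application/activity+json') }
```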
|
|
concern (#10966)
|
|
HTTP 401 responses returned by Mastodon's inbox controller may
be temporary if, for instance, the requesting user's actor/key json
could not be retrieved in a timely fashion. This change allows retries
instead of dropping the message entirely.
Also added HTTP 408 as that error is by nature temporary.
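Put differently, 401 and 408 join the set of response codes worth retrying. A hedged sketch of such a check; the other codes listed are illustrative, not an exact copy of the controller's behaviour.
```
# Codes that may succeed on a later attempt; 401 and 408 are the additions
# discussed above, the rest are illustrative.
RETRYABLE_CODES = [401, 408, 429, 500, 502, 503, 504].freeze

def unsalvageable?(status_code)
  status_code >= 400 && !RETRYABLE_CODES.include?(status_code)
end

p unsalvageable?(401) # => false: retry, the actor/key JSON may become fetchable
p unsalvageable?(404) # => true:  the inbox is gone, drop the message
```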
|
|
* Do not retry processing ActivityPub jobs raising validation errors
Jobs yielding validation errors most probably won't ever be accepted,
so it makes sense not to clutter the queues with retries.
* Lower RecordInvalid error reporting to debug log level
* Remove trailing whitespace
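With Sidekiq, not retrying simply means letting the job finish: the sketch below swallows the validation error (the real worker logs it at debug level), using stand-ins so it runs without Rails.
```
# Stand-in for ActiveRecord::RecordInvalid so the sketch runs without Rails.
class RecordInvalid < StandardError; end

def process(payload)
  raise RecordInvalid, 'Status is invalid' unless payload[:valid]
  puts "processed #{payload[:id]}"
end

# A validation failure is rescued and only logged, so the job completes
# normally and Sidekiq never schedules a retry for it.
def perform(payload)
  process(payload)
rescue RecordInvalid => error
  warn "skipping activity that will never validate: #{error.message}"
end

perform(id: 1, valid: true)
perform(id: 2, valid: false)
```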
|
|
Also, fix some n+1 queries
Resolve #10365
|
|
* Fix poll update handler calling a method that was not available
Fix regression from #10209
* Refactor VoteService
* Refactor ActivityPub::DistributePollUpdateWorker and optimize it
* Fix typo
* Fix typo
|
|
* Process incoming poll tallies update
* Send Update on poll vote
* Do not send Updates for a poll more often than once every 3 minutes
* Include voters in people to notify of results update
* Schedule closing poll worker on poll creation
* Add new notification type for ending polls
* Add front-end support for ended poll notifications
* Fix UpdatePollSerializer
* Fix Updates not being triggered by local votes
* Fix test failures
* Fix web push notifications for closing polls
* Minor cleanup
* Notify voters of both remote and local polls when those close
* Fix delivery of poll updates to mentioned accounts and voters
|
|
* Do not error out on unsalvageable errors in FetchRepliesService
Fixes #10152
* Fix FetchRepliesWorker erroring out on deleted statuses
|
|
* Fetch up to 5 replies when discovering a new remote status
This is used for resolving threads downwards. The originating
server must add a “replies” attribute with such replies for it to
be useful.
* Add some tests for ActivityPub::FetchRepliesWorker
* Add specs for ActivityPub::FetchRepliesService
* Serialize up to 5 public self-replies for ActivityPub notes
* Add specs for ActivityPub::NoteSerializer
* Move exponential backoff logic to a worker concern
* Fetch first page of paginated collections when fetching thread replies
* Add specs for paginated collections in replies
* Move Note replies serialization to a first CollectionPage
The collection isn't actually pageable yet, as it has neither an id nor
a `next` field. This may come in another PR.
* Use pluck(:uri) instead of map(&:uri) to improve performance
* Fix fetching replies when they are in a CollectionPage
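The serialized `replies` collection described above roughly takes the shape below: a Collection whose `first` CollectionPage carries up to 5 self-reply URIs and, for now, no `id` or `next`. Values are placeholders, not output of the actual serializer.
```
require 'pp'

# Rough shape of the `replies` collection attached to a Note; the real
# serializer derives these values from the status and its self-replies.
def replies_collection(note_uri, self_reply_uris)
  {
    'type'  => 'Collection',
    'id'    => "#{note_uri}/replies",
    'first' => {
      'type'   => 'CollectionPage',
      'partOf' => "#{note_uri}/replies",
      'items'  => self_reply_uris.take(5),
      # No `id` or `next` yet, so consumers cannot page through it.
    },
  }
end

pp replies_collection(
  'https://social.example/users/alice/statuses/1',
  ['https://social.example/users/alice/statuses/2']
)
```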
|
|
Reject those that come from accounts with no local followers and from relays
that are not enabled, that do not address local accounts, and that are not
replies to accounts that do have local followers
|
|
pull queue (#10016)
|
|
|
|
DistributionWorker (#9660)
|
|
* Do not LDS-sign Follow, Accept, Reject, Undo, Block
* Do not use LDS for Create activities of private toots
* Minor cleanup
* Ignore unsigned activities instead of misattributing them
* Use status.distributable? instead of querying visibility directly
|
|
|
|
Fix #9091
|
|
* Add silent column to mentions
* Save silent mentions in ActivityPub Create handler and optimize it
Move networking calls out of the database transaction
* Add "limited" visibility level masked as "private" in the API
Unlike DMs, limited statuses are pushed into home feeds. The access
control rules between direct and limited statuses are almost the same,
except for counter and conversation logic
* Ensure silent column is non-null, add spec
* Ensure filters don't check silent mentions for blocks/mutes
As those are "this person is also allowed to see" rather than "this
person is involved", they do not warrant filtering
* Clean up code
* Use Status#active_mentions to limit returned mentions
* Fix code style issues
* Use Status#active_mentions in Notification
And remove stream_entry eager-loading from Notification
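The distinction between silent and regular mentions can be pictured with a reduced stand-in: silent mentions only grant visibility, so anything that treats a mention as "this person is involved" should look at the non-silent subset.
```
# Reduced stand-in for a mention row; silent mentions address an account
# without involving it in the conversation.
Mention = Struct.new(:account_id, :silent)

mentions = [Mention.new(1, false), Mention.new(2, true)]

# The `active_mentions` idea: only non-silent mentions are considered by
# notifications and by block/mute filtering.
active_mentions = mentions.reject(&:silent)

p active_mentions.map(&:account_id) # => [1]
```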
|
|
* If an Update is signed with known key, skip re-following procedure
Because it means the remote actor did *not* lose their database
* Add CLI method for rotating keys
bin/tootctl accounts rotate [USERNAME]
Generates a new RSA key per account and sends out an Update activity
signed with the old key.
* Key rotation: Space out Update fan-outs every 5 minutes per 1000 accounts
* Skip suspended accounts in key rotation
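A sketch of the per-account rotation step, using OpenSSL directly; signing the resulting Update with the old key and spacing out the fan-out (as in the last two points) are only hinted at in comments.
```
require 'openssl'

# Generate a fresh keypair; the Update activity announcing the new public key
# has to be signed with the *old* key so remote servers that still hold the
# old key can verify it.
def rotate_key(old_private_key_pem)
  old_key = OpenSSL::PKey::RSA.new(old_private_key_pem)
  new_key = OpenSSL::PKey::RSA.new(2048)

  {
    signing_key:     old_key,                # used one last time, for the Update
    new_private_key: new_key.to_pem,
    new_public_key:  new_key.public_key.to_pem,
  }
end

current_pem = OpenSSL::PKey::RSA.new(2048).to_pem
rotated     = rotate_key(current_pem)
puts rotated[:new_public_key]
```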
|
|
Regression from #7998 led to profile updates not being sent
|
|
* Add federation relay support
* Add admin UI for managing relays
* Include actor on relay-related activities
* Fix i18n
|
|
|
|
(#7541)
* Do not raise delivery failure on 4xx errors, increase stoplight threshold
Raise the Stoplight failure threshold from 3 to 10.
Status code 429 will still raise a failure and get retried.
* Oops
|
|
* Revert "Fixes/do not override timestamps (#7331)"
This reverts commit 581a5c9d29ef2a12f46b67a1097a9ad6df1c6953.
* Document Snowflake ID corner-case a bit more
Snowflake IDs are used for two purposes: making object identifiers harder to
guess and ensuring they are in chronological order. For this reason, they
are based on the `created_at` attribute of the object.
Unfortunately, inserting items with older snowflake IDs will break the
assumption of consumers of the paging APIs that new items will always have
a greater identifier than the last seen one (see the sketch below).
* Add `override_timestamps` virtual attribute to not correlate snowflake ID with created_at
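A sketch of why the IDs track `created_at` and why backdated inserts are a problem, assuming a 48-bit millisecond timestamp plus 16 bits of sequence data; treat the exact layout as illustrative here.
```
# High bits: millisecond timestamp; low 16 bits: per-insert sequence data.
def snowflake_id(time, sequence)
  ((time.to_f * 1000).to_i << 16) | (sequence & 0xFFFF)
end

earlier = snowflake_id(Time.now - 3600, 1) # status backdated by one hour
current = snowflake_id(Time.now, 1)

# Paging clients assume ids grow over time, which holds here...
p earlier < current # => true
# ...but a client that has already seen `current` will never page back to an
# item inserted later with the `earlier` id, hence `override_timestamps`.
```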
|
|
* Do not override timestamps for incoming toots
* Remove every reference to override_timestamps
Statuses are now created with the announced publishing date
and are only pushed to timelines if that date is at most
6 hours earlier than the time at which they are processed.
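The insertion rule reads as a simple predicate, sketched with the 6-hour window mentioned above.
```
# Only fan a status out to timelines if its announced publishing date is at
# most 6 hours before the moment it is processed.
MAX_BACKDATE_SECONDS = 6 * 60 * 60

def push_to_timelines?(created_at, processed_at = Time.now)
  processed_at - created_at <= MAX_BACKDATE_SECONDS
end

p push_to_timelines?(Time.now - 3_600)  # => true, one hour old
p push_to_timelines?(Time.now - 36_000) # => false, ten hours old
```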
|
|
* Revert "Weblate translations 20180503 (#7325)"
This reverts commit dfa6bccb64d9ee5512dddc10afd9a484db2dbb25.
* Revert "Prevent timeline from moving when cursor is hovering over it (fixes #7278) (#7327)"
This reverts commit 58852695c8ec490239ed3812f82971f8c1e6c172.
* Revert "Add pry-byebug (#7307)"
This reverts commit ab773e4d5ffdd78a61d3ebf0f79e60ee5c9f7e92.
* Revert "Do not override timestamps for incoming toots (#7326)"
This reverts commit bd367918328daedb37f49727f4e16e33679fdb15.
|
|
|
|
* Ensure SynchronizeFeaturedCollectionWorker is unique and clean up
Fix #7041
* Fix code style issue
|
|
|
|
HTTP connections must be explicitly closed in many cases, and letting the
perform method close connections makes its callers less redundant and
prevents them from forgetting to close connections.
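The pattern being described, reduced to plain Net::HTTP: the method that performs the request also owns closing the connection, so callers only deal with the response. Names are illustrative, not the actual `Request` class.
```
require 'net/http'
require 'uri'

# Open the connection, yield the response, and always close the connection,
# so callers cannot forget to do it.
def perform(uri)
  http = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https')
  yield http.request(Net::HTTP::Get.new(uri))
ensure
  http&.finish
end

perform(URI('https://example.com/')) { |response| puts response.code }
```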
|
|
* Federate pinned statuses over ActivityPub
* Display pinned toots in web UI
Fix #6117
* Fix migration
* Fix tests
* Update outbox_serializer.rb
* Update remove_serializer.rb
* Update add_serializer.rb
* Update fetch_featured_collection_service.rb
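For reference, a pin federates as an `Add` activity targeting the actor's featured collection (and unpinning as a matching `Remove`); a hand-written example of the rough shape, with placeholder URIs:
```
require 'json'

# Rough shape of the Add activity sent when a status is pinned; Remove is the
# same with 'Remove' as the type. URIs are placeholders.
add_activity = {
  '@context' => 'https://www.w3.org/ns/activitystreams',
  'type'     => 'Add',
  'actor'    => 'https://social.example/users/alice',
  'object'   => 'https://social.example/users/alice/statuses/1',
  'target'   => 'https://social.example/users/alice/collections/featured',
}

puts JSON.pretty_generate(add_activity)
```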
|
|
Currently, Mastodon will retry delivering toots for a bit over 1 hour.
This is a very short timespan when considering private and direct toots, which
cannot be seen by the recipient at all after the delivery attempts have failed.
Ideally, private and direct toots should have a different number of retries,
but I do not know how to do that.
|
|
* Avoid sending explicit Undo->Announce when original deleted
* Do not forward a reply back to the server that sent it
* Deduplicate inboxes of rebloggers' followers for delete forwarding
* Adjust test
* Fix wrong class, bad SQL, wrong variable, outdated comment
|
|
* Close connection when succeeded posting
* Update webmock
|
|
- Rename Mastodon::TimestampIds into Mastodon::Snowflake for clarity
- Skip for statuses coming from inbox, aka delivered in real-time
- Skip for statuses that claim to be from the future
|
|
|
|
- A successful delivery cancels it out
- An incoming delivery from the account owning the inbox cancels it out
|