path: root/app/workers/thread_resolve_worker.rb
Age  Commit message  Author
2023-02-10  Fix unbounded recursion in post discovery (#23506)  [Claire]
* Add a limit to how many posts can get fetched as a result of a single request
* Add tests
* Always pass `request_id` when processing `Announce` activities

Co-authored-by: nametoolong <nametoolong@users.noreply.github.com>
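The commit message does not include the mechanism, but the idea is that all fetches triggered by one inbound request share a single budget, tracked under the request's `request_id`. A minimal sketch, assuming a hypothetical Redis-backed counter (the class name, key format, and limit value are illustrative, not Mastodon's actual implementation):

```ruby
# Hypothetical per-request fetch limiter: every recursive fetch spawned by
# the same inbound request increments one shared counter.
class FetchLimiter
  MAX_POSTS_PER_REQUEST = 500 # assumed value, for illustration only

  def initialize(redis)
    @redis = redis
  end

  # Returns true while the request's fetch budget is not exhausted.
  def allow_fetch?(request_id)
    key   = "fetch-limit:#{request_id}"
    count = @redis.incr(key)
    @redis.expire(key, 24 * 60 * 60) if count == 1 # let stale counters expire
    count <= MAX_POSTS_PER_REQUEST
  end
end
```

Always passing `request_id` through `Announce` handling matters because boosts are another path by which a post (and therefore its whole thread) can be fetched; without it, those fetches would escape the shared budget.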
2021-04-26  Fix thread resolve worker retrying when status no longer exists (#16109)  [Eugen Rochko]
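The fix implied here is a guard: if the status that queued the job has been deleted by the time the job runs, the worker should return cleanly instead of raising and being retried. A sketch, assuming the worker receives the child status id (a hypothetical reconstruction, not the commit's exact code):

```ruby
class ThreadResolveWorker
  include Sidekiq::Worker

  def perform(child_status_id, parent_url)
    child_status = Status.find_by(id: child_status_id)
    return if child_status.nil? # status was deleted meanwhile; retrying cannot succeed

    # ... resolve the parent and attach it as before ...
  end
end
```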
2019-02-28  Improved remote thread fetching (#10106)  [ThibG]
* Fetch up to 5 replies when discovering a new remote status. This is used for resolving threads downwards. The originating server must add a “replies” attribute with such replies for it to be useful.
* Add some tests for ActivityPub::FetchRepliesWorker
* Add specs for ActivityPub::FetchRepliesService
* Serialize up to 5 public self-replies for ActivityPub notes
* Add specs for ActivityPub::NoteSerializer
* Move exponential backoff logic to a worker concern
* Fetch first page of paginated collections when fetching thread replies
* Add specs for paginated collections in replies
* Move Note replies serialization to a first CollectionPage. The collection isn't actually pageable yet, as it has no id nor a `next` field. This may come in another PR.
* Use pluck(:uri) instead of map(&:uri) to improve performance
* Fix fetching replies when they are in a CollectionPage
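The trickiest part above is that `replies` may arrive as an inlined array, a `Collection`/`OrderedCollection` pointing at a first page, or a `CollectionPage` directly. A sketch of normalizing those cases when gathering reply items (the helper and method names are assumptions, not Mastodon's actual API):

```ruby
# Returns the reply items from an ActivityPub `replies` collection,
# following `first` to the initial page when given a top-level Collection.
def collection_items(collection_or_uri)
  collection = fetch_collection(collection_or_uri) # hypothetical JSON fetch helper
  return if collection.nil?

  # A top-level Collection carries no items itself; its `first` page does.
  if %w(Collection OrderedCollection).include?(collection['type']) && collection['first'].present?
    collection = fetch_collection(collection['first'])
    return if collection.nil?
  end

  case collection['type']
  when 'Collection', 'CollectionPage'
    collection['items']
  when 'OrderedCollection', 'OrderedCollectionPage'
    collection['orderedItems']
  end
end
```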
2017-11-11  Retry thread resolving (#5599)  [ThibG]
Thread resolving is one of the few tasks that isn't retried on failure. One common cause of failure for this task is a well-connected user replying to a toot from a little-connected user on a small instance: the small instance will get many requests at once, and will often fail to answer them within the 10-second timeout used by Mastodon. This change makes the ThreadResolveWorker retry a few times, with a rapidly increasing delay between retries and a large random contribution in order to spread the load over time.
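In Sidekiq terms this is a bounded retry count plus a custom retry schedule via `sidekiq_retry_in`; a sketch (the retry count and delay formula are illustrative, not necessarily the commit's exact values):

```ruby
class ThreadResolveWorker
  include Sidekiq::Worker

  sidekiq_options retry: 3

  # Rapidly increasing delay plus a large random jitter term, so retries
  # against a struggling small instance are spread out over time.
  sidekiq_retry_in do |count|
    15 + 10 * (count**4) + rand(10 * (count + 1))
  end
end
```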
2017-04-04  Separate background jobs into different queues. ATTENTION: new queue "pull" must be added to the Sidekiq invocation in your systemd file  [Eugen Rochko]
The pull queue will handle link crawling, thread resolving, and OStatus processing. Such tasks are more likely to hang for a longer time (due to network requests), so it is more sensible not to make the "in-house" tasks wait for them.
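For reference, a systemd Sidekiq invocation with the new queue added might look like the following (the other queue names and the concurrency value are assumptions based on a typical setup of the time, not taken from the commit):

```
ExecStart=/usr/bin/bundle exec sidekiq -c 5 -q default -q mailers -q pull
```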
2017-01-05  Improve background jobs params and error handling  [Eugen Rochko]
2016-12-12  Restoring old async behaviour of thread resolving as it proved to be more robust  [Eugen Rochko]
2016-12-11  Thread resolving no longer needs to be separate from ProcessFeedService, since that is only ever called in the background  [Eugen Rochko]
2016-11-28  Adding embedded PuSH server  [Eugen Rochko]
2016-11-15  Fix rubocop issues, introduce usage of frozen literal to improve performance  [Eugen Rochko]
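The "frozen literal" here refers to Ruby's magic comment, which freezes every string literal in the file so that repeated evaluations don't allocate new string objects:

```ruby
# frozen_string_literal: true

# Without the magic comment above, each call to #queue_name would allocate
# a brand-new String; with it, the same frozen object is returned every time.
def queue_name
  'pull'
end

queue_name.frozen? # => true
```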
2016-09-29  Improve code style  [Eugen Rochko]
2016-09-21  Fix #24 - Thread resolving for remote statuses  [Eugen Rochko]
This is a big one, so let me enumerate:

Accounts as well as stream entry pages now contain Link headers that reference the Atom feed and Webfinger URL for the former, and the Atom entry for the latter. So you only need to HEAD those resources to get that information; there is no need to download and parse HTML <link>s.

ProcessFeedService will now queue ThreadResolveWorker for each remote status that it cannot find otherwise. Furthermore, entries are now processed in reverse order (from bottom to top) in case a newer entry references a chronologically previous one.

ThreadResolveWorker uses FetchRemoteStatusService to obtain a status and attach the child status it was queued for to it.

FetchRemoteStatusService looks up the URL, first with a HEAD request, and tests whether it's an Atom feed, in which case it is processed directly. Next it checks for Link headers pointing to the Atom feed, in which case that is fetched and processed. Lastly, if it's HTML, it is checked for <link>s to the Atom feed, and if one is found, that is fetched and processed. The account for the status is derived from the author/name attribute in the XML and the hostname in the URL (domain). FollowRemoteAccountService and ProcessFeedService are used.

This means that threads are potentially resolved recursively until a dead end is encountered; however, this is performed asynchronously over background jobs, so it should be OK.
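Putting the worker's role in this flow into code, a minimal reconstruction might look like the following (the method signature and the `thread` association name are assumptions, not necessarily the original implementation):

```ruby
class ThreadResolveWorker
  include Sidekiq::Worker

  def perform(child_status_id, parent_url)
    child_status  = Status.find(child_status_id)
    parent_status = FetchRemoteStatusService.new.call(parent_url)
    return if parent_status.nil? # dead end: the parent could not be resolved

    # Attach the reply this job was queued for to the freshly fetched parent
    child_status.thread = parent_status
    child_status.save!
  end
end
```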