./bin/tootctl media remove --days 7 --background
Make the old rake task point to it
remove_remote (#8339)
* Fix uncaching worker
* Revert to using Paperclip's filesystem backend instead of fog-local
fog-local has lots of concurrency issues, causing failure to delete files,
dangling file records, and spurious errors in UncacheMediaWorker
Regression from #7998 led to profile updates not being sent
* Add federation relay support
* Add admin UI for managing relays
* Include actor on relay-related activities
* Fix i18n
* Send rejections to followers when user hides domain they're on
* Use account domain blocks for "authorized followers" action
Replace soft-blocking (block & unblock) behaviour with follow rejection
* Split sync and async work of account domain blocking
Do not create a domain block when removing followers by domain, as that
is probably unexpected from the user's perspective.
* Adjust confirmation message for domain block
* yarn manage:translations
* Speed up some rake tasks by moving execution to Sidekiq
mastodon:media:remove_silenced
mastodon:media:remove_remote
mastodon:media:redownload_avatars
mastodon:feeds:build
* Fix code style issue
(#7541)
* Do not raise delivery failure on 4xx errors, increase stoplight threshold
Raise the Stoplight failure threshold from 3 to 10
Status code 429 will still raise a failure and get retried (see the sketch below)
* Oops
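
A minimal sketch of what the circuit-breaker side of this might look
like, assuming the Stoplight gem's block-style API of that era; the
light name and delivery call are illustrative, not Mastodon's actual
code:

    require 'stoplight'

    # Hypothetical stand-in for the real delivery call.
    def perform_request(url)
      # ... POST the payload to url ...
    end

    def deliver!(url)
      # The light opens (starts short-circuiting new deliveries) only
      # after 10 consecutive failures instead of Stoplight's default of 3.
      Stoplight("deliver:#{url}") { perform_request(url) }
        .with_threshold(10)
        .run
    end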
- POST /api/v1/push/subscription
- PUT /api/v1/push/subscription
- DELETE /api/v1/push/subscription
- New OAuth scope: "push" (required for the above methods)
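
A hedged usage sketch in Ruby; the parameter names follow the Web Push
convention of an endpoint plus p256dh/auth keys, but check the API
documentation for the exact payload:

    require 'net/http'
    require 'uri'

    uri = URI('https://mastodon.example/api/v1/push/subscription')
    req = Net::HTTP::Post.new(uri)
    req['Authorization'] = 'Bearer TOKEN_WITH_PUSH_SCOPE'  # "push" scope
    req.set_form_data(
      'subscription[endpoint]'     => 'https://push.example/endpoint',
      'subscription[keys][p256dh]' => 'BASE64_PUBLIC_KEY',
      'subscription[keys][auth]'   => 'BASE64_AUTH_SECRET'
    )
    res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(req)
    end
    puts res.code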
* Revert "Fixes/do not override timestamps (#7331)"
This reverts commit 581a5c9d29ef2a12f46b67a1097a9ad6df1c6953.
* Document Snowflake ID corner-case a bit more
Snowflake IDs are used for two purposes: making object identifiers harder to
guess and ensuring they are in chronological order. For this reason, they
are based on the `created_at` attribute of the object.
Unfortunately, inserting items with older snowflake IDs will break the
assumption of consumers of the paging APIs that new items will always have
a greater identifier than the last seen one.
* Add `override_timestamps` virtual attribute to not correlate snowflake ID with created_at
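
A toy illustration of why such IDs sort chronologically; Mastodon's
actual scheme packs a millisecond timestamp into the high bits and
per-millisecond sequence data into the low 16 (see Mastodon::Snowflake
for the real details):

    def snowflake_id(created_at, sequence)
      # Timestamp in the high bits, sequence data in the low 16.
      ((created_at.to_f * 1000).to_i << 16) | (sequence & 0xFFFF)
    end

    early = snowflake_id(Time.utc(2018, 5, 1), 0)
    late  = snowflake_id(Time.utc(2018, 5, 2), 0)
    raise 'IDs must sort chronologically' unless early < late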
* Do not override timestamps for incoming toots
* Remove every reference to override_timestamps
Statuses are now created with the announced publishing date
and are only pushed to timelines if that date is at most
6 hours earlier than the time at which they are processed.
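
A sketch of that guard (names illustrative): the status keeps its
announced created_at, but is only fanned out to followers' timelines
while that date is reasonably fresh:

    MAX_BACKDATE_SECONDS = 6 * 3600

    # Store with the announced date, but skip timeline fan-out for
    # statuses announced more than six hours before processing time.
    def push_to_timelines?(created_at, now = Time.now.utc)
      now - created_at <= MAX_BACKDATE_SECONDS
    end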
* Revert "Weblate translations 20180503 (#7325)"
This reverts commit dfa6bccb64d9ee5512dddc10afd9a484db2dbb25.
* Revert "Prevent timeline from moving when cursor is hovering over it (fixes #7278) (#7327)"
This reverts commit 58852695c8ec490239ed3812f82971f8c1e6c172.
* Revert "Add pry-byebug (#7307)"
This reverts commit ab773e4d5ffdd78a61d3ebf0f79e60ee5c9f7e92.
* Revert "Do not override timestamps for incoming toots (#7326)"
This reverts commit bd367918328daedb37f49727f4e16e33679fdb15.
Offload creation of local notifications to a worker. Remove two
redundant SQL queries from ProcessMentionsService, remove n+1
XML/JSON serialization via memoization
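
The n+1 fix is plain memoization: render the payload once and reuse it
for every mention. A hedged sketch with stand-in names:

    require 'json'

    Status = Struct.new(:id, :text)  # minimal stand-in for the real model

    class MentionNotifier
      def initialize(status)
        @status = status
      end

      # Memoized: notifying N mentioned accounts serializes once, not N times.
      def payload
        @payload ||= { id: @status.id, content: @status.text }.to_json
      end
    end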
* No need to re-require sidekiq plugins, they are required via Gemfile
* Add derailed_benchmarks tool, no need to require TTY gems in Gemfile
* Replace ruby-oembed with FetchOEmbedService
Reduce startup allocations by 45382 objects
* Remove preloaded JSON-LD in favour of caching HTTP responses
Reduce boot RAM by about 6 MiB
* Fix tests
* Fix test suite by stubbing out JSON-LD contexts
* Ensure SynchronizeFeaturedCollectionWorker is unique and clean up
Fix #7041
* Fix code style issue
* Adjust privacy policy to be more specific to Mastodon
Fix #6613
* Change data retention of IP addresses from 5 years to 1 year
* Add even more information
* Remove all (now invalid) translations of the privacy policy
* Add information about archive takeout, remove pointless consent section
* Emphasis on DM privacy
* Improve wording
* Add line about data use for moderation purposes
The to_s method of HTTP::Response blocks until it has received the whole
content, no matter how big it is. This means it may waste time receiving
unacceptably large files, and may also consume memory and disk in the
process. This change fixes the inefficiency by checking the response
length while receiving.
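
A sketch of the check-while-receiving approach using the http.rb gem;
the cap and helper name are illustrative:

    require 'http'  # the http.rb gem

    MAX_BODY_BYTES = 10 * 1024 * 1024

    def body_with_limit(response, limit = MAX_BODY_BYTES)
      body = +''
      # Stream chunk by chunk instead of buffering everything via to_s.
      response.body.each do |chunk|
        body << chunk
        # Abort mid-stream rather than receiving an arbitrarily large file.
        raise 'response body too large' if body.bytesize > limit
      end
      body
    end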
HTTP connections must be explicitly closed in many cases; letting the
perform method close connections makes its callers less redundant and
prevents them from forgetting to close connections.
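
The shape of that pattern, sketched with http.rb (the class and names
are illustrative): perform owns the connection, so callers cannot leak
one:

    require 'http'

    class Request
      def initialize(url)
        @url = url
      end

      # Callers receive the response via the block; the connection is
      # closed here no matter what happens in the caller.
      def perform
        client = HTTP.persistent(@url)
        yield client.get(@url)
      ensure
        client&.close
      end
    end

    Request.new('https://example.com').perform { |res| puts res.status }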
* Federate pinned statuses over ActivityPub
* Display pinned toots in web UI
Fix #6117
* Fix migration
* Fix tests
* Update outbox_serializer.rb
* Update remove_serializer.rb
* Update add_serializer.rb
* Update fetch_featured_collection_service.rb
* Fix #201: Account archive download
* Export actor and private key in the archive
* Optimize BackupService
- Add conversation to cached associations of status, because
somehow it was forgotten and is a source of N+1 queries
- Explicitly call GC between batches of records being fetched
(Model class allocations are the worst offender)
- Stream media files into the tar in 1MB chunks, as sketched after this list
(Do not allocate a media file of up to 8MB as a string in memory)
- Use #bytesize instead of #size to calculate file size for JSON
(Fix FileOverflow error)
- Segment media into subfolders by status ID because apparently
GIF-to-MP4 media are all named "media.mp4" for some reason
* Keep uniquely generated filename in Paperclip::GifTranscoder
* Ensure dumped files do not overwrite each other by maintaining directory partitions
* Give tar archives a good name
* Add scheduler to remove week-old backups
* Fix code style issue
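
A sketch of the 1MB-chunk streaming mentioned in the list above, using
the stdlib tar writer (helper name illustrative):

    require 'rubygems/package'

    CHUNK_BYTES = 1024 * 1024  # 1MB

    # Copy one media file into an open Gem::Package::TarWriter without
    # ever holding the whole file (up to 8MB here) in memory at once.
    def add_media_to_tar(tar, path, name)
      tar.add_file_simple(name, 0o444, File.size(path)) do |dest|
        File.open(path, 'rb') do |src|
          while (chunk = src.read(CHUNK_BYTES))
            dest.write(chunk)
          end
        end
      end
    end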
The service, formerly named ResolveRemoteAccountService, resolves local
accounts as well.
Currently, Mastodon will retry delivering toots for a bit over 1 hour.
This is a very short timespan when considering private and direct toots, which
cannot be seen by the recipient at all after the delivery attempts have failed.
Ideally, private and direct toots should have a different number of retries,
but I do not know how to do that.
* Fix regeneration marker not being removed after completion
* Return HTTP 206 from /api/v1/timelines/home if regeneration is in
progress (see the client sketch after this list)
Prioritize RegenerationWorker by putting it into the default queue
* Display loading indicator and poll home timeline while it regenerates
* Add graphic to regeneration message
* Make "not found" indicator consistent with home regeneration
* When list is deleted, remove feed from redis
* Clean up list feeds of inactive users
* Avoid sending explicit Undo->Announce when original deleted
* Do not forward a reply back to the server that sent it
* Deduplicate inboxes of rebloggers' followers for delete forwarding
* Adjust test
* Fix wrong class, bad SQL, wrong variable, outdated comment
* Add structure for lists
* Add list timeline streaming API
* Add list APIs, bind list-account relation to follow relation
* Add API for adding/removing accounts from lists
* Add pagination to lists API
* Add pagination to list accounts API
* Adjust scopes for new APIs
- Creating and modifying lists merely requires "write" scope
- Fetching information about lists merely requires "read" scope
* Add test for wrong user context on list timeline
* Clean up tests
Thread resolving is one of the few tasks that isn't retried on failure.
One common cause for failure of this task is a well-connected user replying to
a toot from a little-connected user on a small instance: the small instance
will get many requests at once, and will often fail to answer requests within
the 10-second timeout used by Mastodon.
This change makes the ThreadResolveWorker retry a few times, with a
rapidly increasing delay between retries and a large random contribution
in order to spread the load over time.
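
A hedged sketch of such a worker: sidekiq_retry_in computes the delay
before each retry, and a fourth-power term plus a random term of the
same magnitude gives the rapidly increasing, well-spread schedule
described above (the exact formula here is illustrative):

    require 'sidekiq'

    class ThreadResolveWorker
      include Sidekiq::Worker
      sidekiq_options queue: 'pull', retry: 3

      # Delay grows steeply with the retry count; the random term keeps
      # many simultaneous retries from hitting the small instance at once.
      sidekiq_retry_in do |count|
        15 + 10 * (count**4) + rand(10 * (count**4))
      end

      def perform(child_status_id, parent_url)
        # ... fetch the parent toot and attach the reply ...
      end
    end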
Fix #5597
* Clean up reblog-tracking sets from FeedManager
Builds on #5419, with a few minor optimizations and cleanup of sets
after they are no longer needed.
* Update tests, fix multiply-reblogged case
Previously, we would have lost the fact that a given status was
reblogged if the displayed reblog of it was removed; now we don't.
Also added tests to make sure FeedManager#trim cleans up our reblog
tracking keys, fixed up FeedCleanupScheduler to use the right loop,
and fixed the test for it.
* Keep references to all reblogs of a status on home feed
When inserting a reblog: add it to the set of reblogs of this status
on the feed; if the original status was present in the feed, add it to
that set as well.
When removing a reblog: remove it from that set, then take a random
remaining item from the set. If one exists, re-insert it into the feed,
otherwise do not re-insert anything. (A sketch of this bookkeeping
follows below.)
Fix #4210
* When original is removed, toss out reblog references
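
A sketch of that bookkeeping with bare Redis sets (key shape and helper
names are illustrative, not FeedManager's exact code):

    require 'redis'

    Reblog = Struct.new(:id, :reblog_of_id)  # minimal stand-in

    def reblog_key(feed_id, original_id)
      "feed:#{feed_id}:reblogs:#{original_id}"
    end

    # On insert: remember this reblog as one of the status's reblogs
    # present on this feed.
    def track_reblog(redis, feed_id, reblog)
      redis.sadd(reblog_key(feed_id, reblog.reblog_of_id), reblog.id)
    end

    # On removal: forget it, then return a random surviving reblog to
    # re-insert into the feed (nil if none remain).
    def untrack_reblog(redis, feed_id, reblog)
      key = reblog_key(feed_id, reblog.reblog_of_id)
      redis.srem(key, reblog.id)
      redis.srandmember(key)
    end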
* Close connection when posting succeeds
* Update webmock
- Rename Mastodon::TimestampIds into Mastodon::Snowflake for clarity
- Skip for statuses coming from inbox, aka delivered in real-time
- Skip for statuses that claim to be from the future
* Improve error handling on LinkCrawlWorker
* Ignore TimeoutError and InvalidURIError too
* Record errors to debug log
* Enable dead job queue on LinkCrawlWorker
Since most acceptable errors were already ignored, only issues on our side should go to the dead job queue.
* Ignore all http gem errors
- A successful delivery cancels it out
- An incoming delivery from the inbox's account cancels it out
Reply distribution is processed by Sidekiq, so the replied-to status may be deleted before this runs.
* Add scheduled worker to purge old user IPs
* Use ruby 1.9 hash syntax
* Revert "Enable UniqueRetryJobMiddleware even when called from sidekiq worker (#4836)"
This reverts commit 6859d4c0289e767955aac3f345074220fe200604.
* Revert "Do not execute the job with the same arguments as the retry job (#4814)"
This reverts commit be7ffa2d7539d5a1946a3933cb9d242b9fac0ddc.