author     David Yip <yipdw@member.fsf.org>   2017-10-15 19:49:22 -0500
committer  David Yip <yipdw@member.fsf.org>   2017-10-21 14:54:36 -0500
commit     4a64181461cb02599da98166da4b527adbb705ad (patch)
tree       6c45a1e201c3628e8a2f822f9f92aa81e5b651f0 /db/migrate/20161203164520_add_from_account_id_to_notifications.rb
parent     2e03a10059889cb05d4fab7736447a4315f90bf5 (diff)
Allow keywords to match either substrings or whole words.
Word-boundary matching only works as intended in English and languages
that use similar word-breaking characters; it doesn't work so well in
(say) Japanese, Chinese, or Thai.  It's unacceptable to have a feature
that doesn't work as intended for some languages.  (Especially
considering that the largest contingent on the Mastodon bit of the
fediverse likely speaks Japanese.)
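
To make the failure mode concrete, here is a rough Ruby sketch (not the
actual keyword-mute code) of \b-based whole-word matching.  It behaves
as intended for English, but Japanese is written without spaces, so \b
never finds a usable boundary around the keyword:

    # Hypothetical helper, for illustration only.
    def whole_word?(keyword, text)
      /\b#{Regexp.escape(keyword)}\b/.match?(text)
    end

    whole_word?('cat', 'I adopted a cat')       # => true
    whole_word?('cat', 'concatenate the logs')  # => false, as intended

    # "ねこ" ("cat") is plainly there, but nothing around it registers
    # as a word boundary, so the whole-word match never fires.
    whole_word?('ねこ', 'ねこをかいたい')        # => false, not as intended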

There are rules specified in Unicode TR29[1] for word-breaking across
all languages supported by Unicode, but the rules deliberately do not
cover all cases.  In fact, TR29 states:

    For example, reliable detection of word boundaries in languages such
    as Thai, Lao, Chinese, or Japanese requires the use of dictionary
    lookup, analogous to English hyphenation.

So we aren't going to be able to make word detection work with regexes
within Mastodon (or glitchsoc).  However, for a first pass (even if it's
kind of punting), we can let the user choose between whole-word and
substring detection and warn about the limitations of this
implementation in, say, the docs.
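
Sketched in Ruby (the names below are illustrative, not the actual
glitch-soc models or methods), that first pass could boil down to a
per-keyword flag that decides whether the \b anchors are added at all:

    # Illustrative sketch of the per-keyword toggle described above.
    KeywordEntry = Struct.new(:keyword, :whole_word)

    def matcher_for(entries)
      Regexp.union(
        entries.map do |entry|
          escaped = Regexp.escape(entry.keyword)
          entry.whole_word ? /\b#{escaped}\b/ : /#{escaped}/
        end
      )
    end

    entries = [
      KeywordEntry.new('cat', true),   # whole word: skips "concatenate"
      KeywordEntry.new('ねこ', false)  # substring: the only mode that helps Japanese
    ]

    matcher_for(entries).match?('ねこをかいたい')  # => true, via the substring rule
    matcher_for(entries).match?('concatenated')    # => false, whole-word miss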

[1]: https://unicode.org/reports/tr29/
     https://web.archive.org/web/20171001005125/https://unicode.org/reports/tr29/