This partially solves bpo-23894. (cherry picked from commit 0c4aca5)
| "bR'", 'bR"', "BR'", 'BR"', ): | ||
| "bR'", 'bR"', "BR'", 'BR"', | ||
| "rb'", 'rb"', "Rb'", 'Rb"', | ||
| "rB'", 'rB"', "RB'", 'RB"',): |
_all_string_prefixes in Lib/tokenize.py computes similar prefixes (without the trailing ' or "), but generates them programmatically. That's less error-prone than listing these prefixes by hand. Plus, if "f" strings are added, the number of combinations goes up dramatically.
You can't use it directly, because it doesn't support "ur" and variants, because they're not supported in 3.x. But maybe you can use a similar strategy.
This isn't a blocker.
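For reference, a minimal sketch of the strategy tokenize uses, extended with a hypothetical "ur" base prefix (tokenize._all_string_prefixes itself does not include it, since "ur" is not valid in 3.x). The helper names here are illustrative, not part of any stdlib API:

```python
import itertools

def all_string_prefixes(valid=('b', 'r', 'u', 'ur', 'br')):
    """Generate every ordering and casing of each base prefix,
    e.g. 'br' -> br, bR, Br, BR, rb, rB, Rb, RB.

    Caveat: permutations over-generate for 2.x grammar ('ru' was
    never legal), so a real implementation would need filtering.
    """
    result = {''}
    for prefix in valid:
        for perm in itertools.permutations(prefix):
            for casing in itertools.product(*[(c, c.upper()) for c in perm]):
                result.add(''.join(casing))
    return result

def all_quoted_prefixes():
    # Append a quote character to mirror the hand-written list above.
    return {p + q for p in all_string_prefixes() for q in ("'", '"')}
```

With this, entries like `"rb'"` and `'bR"'` fall out automatically instead of being enumerated by hand.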
I agree this is not elegant. Sadly, the list is not exhaustive, as the _litprefix regex shows, so the algorithm would need to be customized anyway. I'd count this as a possible future improvement.