Unicode characters can sometimes have multiple representations that look identical but are technically different character sequences.
For example, the character “é” can be represented as a single code point (U+00E9) or as an “e” followed by a combining accent (U+0065 + U+0301).
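As a quick illustration (Python shown here, but the same holds in any language), the two encodings of "é" render identically yet compare as different strings:

```python
# Two visually identical strings: precomposed vs. decomposed "é".
precomposed = "\u00e9"   # single code point (U+00E9)
decomposed = "e\u0301"   # "e" plus combining acute accent (U+0065 U+0301)

print(precomposed == decomposed)          # False: the code point sequences differ
print(len(precomposed), len(decomposed))  # 1 2
```

Both strings display as "é", but naive equality checks and length counts treat them as different values.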
This can lead to confusion, comparison issues, and unexpected behavior when working with JSON data.
Using normalized Unicode ensures consistent representation of text, which is important for key comparison, sorting, and searching operations.
When keys are properly normalized, operations like key lookups and equality checks will work as expected across different systems and platforms.
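A minimal sketch of this, using Python's standard `unicodedata` module (the `normalize_keys` helper and the choice of NFC are illustrative assumptions, not part of any particular library):

```python
import json
import unicodedata

def normalize_keys(obj, form="NFC"):
    """Recursively normalize the keys of parsed JSON data.
    Hypothetical helper; NFC is used here as the example target form."""
    if isinstance(obj, dict):
        return {unicodedata.normalize(form, key): normalize_keys(value, form)
                for key, value in obj.items()}
    if isinstance(obj, list):
        return [normalize_keys(item, form) for item in obj]
    return obj

# A key stored in decomposed form: "cafe" followed by a combining accent.
data = json.loads('{"cafe\\u0301": 1}')

print("caf\u00e9" in data)                  # False: precomposed lookup misses
print("caf\u00e9" in normalize_keys(data))  # True once keys are normalized
```

Normalizing once at the point where JSON is parsed keeps every later lookup, sort, and comparison consistent.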
If your project intentionally uses a mix of Unicode normalization forms, such as systems that manipulate strings at the code-point level, this rule might not reflect your project’s needs.
Otherwise, consider refactoring to normalize strings to a single Unicode form before storing or comparing them.