Posted to commits@wicket.apache.org by "Martin Grigorov (JIRA)" <ji...@apache.org> on 2016/08/16 12:04:20 UTC

[jira] [Created] (WICKET-6227) CharSequenceResource calculates wrong length when there are unicode symbols

Martin Grigorov created WICKET-6227:
---------------------------------------

             Summary: CharSequenceResource calculates wrong length when there are unicode symbols
                 Key: WICKET-6227
                 URL: https://issues.apache.org/jira/browse/WICKET-6227
             Project: Wicket
          Issue Type: Bug
          Components: wicket
    Affects Versions: 7.4.0
            Reporter: Martin Grigorov


At the moment CharSequenceResource#getLength() looks like:

{code}
@Override
protected Long getLength(CharSequence data)
{
	return (long) data.length();
}
{code}

This returns wrong results when the data contains Unicode symbols like "\u1234": CharSequence#length() counts chars (UTF-16 code units), but the returned length is used as the response's Content-Length, which must be the number of bytes after encoding.
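
For example, a JDK-only check shows the discrepancy (the symbol "\u1234" needs three bytes in UTF-8):

{code}
import java.nio.charset.StandardCharsets;

public class LengthDemo
{
	public static void main(String[] args)
	{
		String data = "\u1234";

		// CharSequence#length() counts UTF-16 code units (chars), not bytes
		System.out.println(data.length()); // prints 1

		// the number of bytes actually written when the response is UTF-8
		System.out.println(data.getBytes(StandardCharsets.UTF_8).length); // prints 3
	}
}
{code}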

It should use org.apache.wicket.util.string.Strings#lengthInBytes() instead.
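
A minimal sketch of what the fixed override could look like (an illustration only, not the committed patch; it hard-codes UTF-8, while the real fix should delegate to Strings#lengthInBytes with the charset the resource is actually served with):

{code}
// Sketch only: count bytes after encoding instead of chars.
// UTF-8 is an assumption for illustration; the actual implementation
// should use the resource's configured character encoding.
@Override
protected Long getLength(CharSequence data)
{
	return (long) data.toString()
		.getBytes(java.nio.charset.StandardCharsets.UTF_8).length;
}
{code}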


