Posted to fop-dev@xmlgraphics.apache.org by Satoshi Ishigami <is...@victokai.co.jp> on 2002/01/28 00:35:15 UTC
i18n in TXTRenderer
Hi.
I have been hacking on the TXTRenderer for i18n.
Currently the org.apache.fop.render.pcl.PCLStream class is
used as the OutputStream in TXTRenderer. The add method in
the PCLStream class is as follows:
    public void add(String str) {
        if (!doOutput)
            return;
        byte buff[] = new byte[str.length()];
        int countr;
        int len = str.length();
        for (countr = 0; countr < len; countr++)
            buff[countr] = (byte)str.charAt(countr);
        try {
            out.write(buff);
        } catch (IOException e) {
            // e.printStackTrace();
            // e.printStackTrace(System.out);
            throw new RuntimeException(e.toString());
        }
    }
I think this algorithm is wrong for characters above 127.
The reason is that a Java char is 2 bytes wide while a byte
is only 1 byte, so the cast to byte silently discards the
high-order bits of the character. To avoid this problem,
I think the following algorithm is better:
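To illustrate the truncation (a standalone sketch, not FOP code; the class
name TruncationDemo is mine):

```java
import java.nio.charset.StandardCharsets;

public class TruncationDemo {
    public static void main(String[] args) {
        String str = "\u3042"; // HIRAGANA LETTER A
        // Casting the char to a byte keeps only the low 8 bits of 0x3042.
        byte narrowed = (byte) str.charAt(0);
        // Encoding to UTF-8 preserves the character as a 3-byte sequence.
        byte[] utf8 = str.getBytes(StandardCharsets.UTF_8);
        System.out.println(Integer.toHexString(narrowed & 0xFF)); // prints "42"
        System.out.println(utf8.length);                          // prints "3"
    }
}
```

The cast produces 0x42 (an ASCII 'B'), so the original character is
unrecoverable, whereas the UTF-8 bytes round-trip losslessly.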
    public void add(String str) {
        if (!doOutput)
            return;
        try {
            byte buff[] = str.getBytes("UTF-8");
            out.write(buff);
        } catch (IOException e) {
            throw new RuntimeException(e.toString());
        }
    }
This algorithm may not be suitable for PCLRenderer, because
I don't know whether PCL printers support the UTF-8
encoding.
However, I think the TXTRenderer should use a
multilingual-capable encoding, because a single FO file can
contain text in several languages.
Therefore I think the TXTRenderer should not use
PCLStream and should instead use its own stream class (such
as a TXTStream).
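A minimal sketch of such a dedicated stream class might look like this
(the class name TXTStream and its shape are only my suggestion, modeled
on PCLStream, not existing FOP code):

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical TXTStream: same interface as PCLStream, but encodes
// the string with UTF-8 instead of truncating each char to one byte.
public class TXTStream {
    private OutputStream out;
    private boolean doOutput = true;

    public TXTStream(OutputStream os) {
        out = os;
    }

    public void add(String str) {
        if (!doOutput)
            return;
        try {
            byte[] buff = str.getBytes("UTF-8");
            out.write(buff);
        } catch (IOException e) {
            throw new RuntimeException(e.toString());
        }
    }

    public void setDoOutput(boolean doout) {
        doOutput = doout;
    }
}
```

The UnsupportedEncodingException that getBytes can throw is a subclass
of IOException, so the existing catch clause still covers it.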
Is my thinking wrong here?
Best Regards.
---
Satoshi Ishigami VIC TOKAI CORPORATION