Posted to reviews@spark.apache.org by nongli <gi...@git.apache.org> on 2016/04/01 20:28:24 UTC

[GitHub] spark pull request: [SPARK-12785][SQL] Add ColumnarBatch, an in me...

Github user nongli commented on the pull request:

    https://github.com/apache/spark/pull/10628#issuecomment-204503727
  
    This is not an architectural limitation. We just haven't implemented support for it; only a few methods would need to be implemented to support big endian. It would be great if someone in the community working on big-endian hardware could do this.
    
    We don't rely on the byte ordering of the platform. The function you are referring to, putIntLittleEndian, converts the input, which is little endian, to whatever the machine's endianness is. It doesn't constrain what the host endianness has to be. It is used in cases where the data is encoded on disk in a canonical binary format. On a big-endian host this would have to byte-swap, but I think that's inevitable since the on-disk data had to pick an endianness (this is the code that's not implemented right now).
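    
    For illustration, a minimal, self-contained Java sketch of that idea (not Spark's actual implementation; the class and method names here are made up): decode a little-endian 4-byte int so the result is correct whatever the host's byte order is.
    
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
    
        public class LittleEndianDecode {
    
          // Assemble the value byte by byte, so the result is independent of the
          // host's byte order. On a big-endian host this amounts to the byte swap
          // described above; a little-endian host reads the bytes straight through.
          static int readIntLittleEndian(byte[] src, int offset) {
            return (src[offset] & 0xFF)
                | (src[offset + 1] & 0xFF) << 8
                | (src[offset + 2] & 0xFF) << 16
                | (src[offset + 3] & 0xFF) << 24;
          }
    
          // Same thing via ByteBuffer: setting the buffer's order to LITTLE_ENDIAN
          // makes getInt() perform any needed swap internally, regardless of the
          // platform's native order.
          static int readIntLittleEndian(ByteBuffer buf, int offset) {
            return buf.order(ByteOrder.LITTLE_ENDIAN).getInt(offset);
          }
    
          public static void main(String[] args) {
            byte[] data = {0x78, 0x56, 0x34, 0x12};  // little-endian encoding of 0x12345678
            System.out.printf("0x%08X%n", readIntLittleEndian(data, 0));
            System.out.printf("0x%08X%n", readIntLittleEndian(ByteBuffer.wrap(data), 0));
            // Both lines print 0x12345678 on little-endian and big-endian hosts alike.
          }
        }
    
    Either way, the on-disk format fixes the byte order once, and the reader pays the swap cost only when the host disagrees with it.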
    
    If you find places where Spark requires a particular host endianness, I'd consider that a bug.


