Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2018/11/08 18:45:00 UTC

[jira] [Comment Edited] (SPARK-24421) sun.misc.Unsafe in JDK11

    [ https://issues.apache.org/jira/browse/SPARK-24421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16680169#comment-16680169 ] 

Sean Owen edited comment on SPARK-24421 at 11/8/18 6:44 PM:
------------------------------------------------------------

I've found that, actually, we can't even access clean() via reflection. See [https://stackoverflow.com/questions/41265266/how-to-solve-inaccessibleobjectexception-unable-to-make-member-accessible-m] for example. It works, but only if the JVM is run with a flag like {{--add-opens java.base/java.lang=ALL-UNNAMED}}.
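
To make the failure concrete, here's a minimal standalone sketch of the reflective route (my own illustration, not Spark code; which package the --add-opens flag has to open depends on the class being reflected into):
{code:java}
import java.nio.ByteBuffer

object CleanerAccessDemo extends App {
  val buffer = ByteBuffer.allocateDirect(1024)
  // DirectByteBuffer.cleaner() is public, but its declaring class lives in a
  // package that the java.base module doesn't open for deep reflection.
  val cleanerMethod = buffer.getClass.getMethod("cleaner")
  // On Java 9+ the next line throws java.lang.reflect.InaccessibleObjectException
  // unless the JVM was started with a matching --add-opens flag; on 8 it just works.
  cleanerMethod.setAccessible(true)
  val cleaner = cleanerMethod.invoke(buffer)
  val cleanMethod = cleaner.getClass.getMethod("clean")
  cleanMethod.setAccessible(true) // same restriction applies to the cleaner's class
  cleanMethod.invoke(cleaner)     // frees the off-heap allocation immediately
}
{code}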

We do indeed have to write code that works on both Java 8 and 11. We will have to continue to compile with Java 8; the JVM won't run code compiled for a later version (the old UnsupportedClassVersionError), so no, we can't compile with Java 11 and run on Java 8.
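
For reference, the check behind UnsupportedClassVersionError is just the class-file major version in the bytecode header, and a JVM refuses anything newer than its own. Quick way to see it (a hypothetical snippet of mine, not from the Spark tree):
{code:java}
import java.io.{DataInputStream, FileInputStream}

object ClassFileVersion extends App {
  // args(0) should point at any compiled .class file
  val in = new DataInputStream(new FileInputStream(args(0)))
  try {
    require(in.readInt() == 0xCAFEBABE, "not a class file")
    val minor = in.readUnsignedShort()
    val major = in.readUnsignedShort() // 52 = Java 8, 55 = Java 11
    println(s"class file version $major.$minor")
  } finally {
    in.close()
  }
}
{code}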

But compiling with Java 8 should be fine, as Java 11 can read that bytecode; we just can't access Java 9+ classes without reflection. It's easy enough to resolve the _compile_ problems here, and yes, it will all still work on Java 8 as it does today. The problem is running on Java 11 right now.
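
(The usual trick for that last part, sketched here as an assumption rather than the actual change: probe for a Java 9+ class reflectively, so the same Java-8-compiled bytecode can branch on whichever runtime it finds itself on.)
{code:java}
object JavaVersionProbe {
  // java.lang.Runtime.Version exists only on Java 9+, so a reflective lookup
  // doubles as a runtime version check with no compile-time dependency on it.
  def isJava9OrLater: Boolean =
    try {
      Class.forName("java.lang.Runtime$Version")
      true
    } catch {
      case _: ClassNotFoundException => false
    }
}
{code}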

I'm going to go ahead and open a pull request that fixes the compile issues for Java 11 and gets this to the point where it should run on Java 11 _if_ you set the flag above. That's progress at least.

The single issue here is this code in StorageUtils:
{code:java}
/**
 * Attempt to clean up a ByteBuffer if it is direct or memory-mapped. This uses an *unsafe* Sun
 * API that will cause errors if one attempts to read from the disposed buffer. However, neither
 * the bytes allocated to direct buffers nor file descriptors opened for memory-mapped buffers put
 * pressure on the garbage collector. Waiting for garbage collection may lead to the depletion of
 * off-heap memory or huge numbers of open files. There's unfortunately no standard API to
 * manually dispose of these kinds of buffers.
 */
def dispose(buffer: ByteBuffer): Unit = {
  if (buffer != null && buffer.isInstanceOf[MappedByteBuffer]) {
    logTrace(s"Disposing of $buffer")
    cleanDirectBuffer(buffer.asInstanceOf[DirectBuffer])
  }
}

private def cleanDirectBuffer(buffer: DirectBuffer): Unit = {
  val cleaner = buffer.cleaner()
  if (cleaner != null) {
    cleaner.clean()
  }
}
{code}

I wonder how bad it is if this simply isn't called? Sounds bad. Not strictly fatal, but bad. It means everything should still _run_ on Java 11, even if this method can't be invoked.
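
If we wanted dispose() to degrade gracefully instead of throwing, one option is a defensive wrapper like the sketch below (my assumption about one possible shape, not the actual patch; logWarning is from Spark's Logging trait, as in StorageUtils above):
{code:java}
import sun.nio.ch.DirectBuffer

private def cleanDirectBuffer(buffer: DirectBuffer): Unit = {
  try {
    val cleaner = buffer.cleaner()
    if (cleaner != null) {
      cleaner.clean()
    }
  } catch {
    // Compiled against the Java 8 signature, this call can fail on Java 11 with
    // a LinkageError (e.g. NoSuchMethodError, since cleaner()'s declared return
    // type changed in JDK 9), so catch more than plain exceptions here.
    case e @ (_: Exception | _: LinkageError) =>
      logWarning(s"Couldn't clean $buffer; its off-heap memory will only be " +
        "reclaimed when the buffer itself is garbage-collected", e)
  }
}
{code}
The cost of swallowing the failure is exactly what the scaladoc above warns about: the off-heap bytes and file descriptors then hang around until GC collects the buffer.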

But is there any reason to think this kind of low-level intervention in the ByteBuffer wouldn't be needed on Java 11 anyway? I doubt it, but I wonder.


was (Author: srowen):
I've found that, actually, we can't even access clean() via reflection. See [https://stackoverflow.com/questions/41265266/how-to-solve-inaccessibleobjectexception-unable-to-make-member-accessible-m] for example. It works, but only if the JVM is run with a flag like {{--add-opens java.base/java.lang=ALL-UNNAMED}}.

We do indeed have to write code that works on both Java 8 and 11. We will have to continue to compile with Java 8; the JVM won't run code compiled for a later version (the old UnsupportedClassVersionError), so no, we can't compile with Java 11 and run on Java 8.

But compiling with Java 8 should be fine, as Java 11 can read that bytecode; we just can't access Java 9+ classes without reflection. It's easy enough to resolve the _compile_ problems here, and yes, it will all still work on Java 8 as it does today. The problem is running on Java 11 right now.

I'm going to go ahead and open a pull request that fixes the compile issues for Java 11 and gets this to the point where it should run on Java 11 _if_ you set the flag above. That's progress at least.

The single issue here is this code in StorageUtils:
{code:java}
/**
 * Attempt to clean up a ByteBuffer if it is direct or memory-mapped. This uses an *unsafe* Sun
 * API that will cause errors if one attempts to read from the disposed buffer. However, neither
 * the bytes allocated to direct buffers nor file descriptors opened for memory-mapped buffers put
 * pressure on the garbage collector. Waiting for garbage collection may lead to the depletion of
 * off-heap memory or huge numbers of open files. There's unfortunately no standard API to
 * manually dispose of these kinds of buffers.
 */
def dispose(buffer: ByteBuffer): Unit = {
  if (buffer != null && buffer.isInstanceOf[MappedByteBuffer]) {
    logTrace(s"Disposing of $buffer")
    cleanDirectBuffer(buffer.asInstanceOf[DirectBuffer])
  }
}

private def cleanDirectBuffer(buffer: DirectBuffer): Unit = {
  val cleaner: AnyRef = buffer.cleaner()
  if (cleaner != null) {
    CLEAN_METHOD.invoke(cleaner) // CLEAN_METHOD: cached reflective Method for clean(), defined elsewhere
  }
}
{code}

I wonder how bad it is if this simply isn't called? Sounds bad. Not strictly fatal, but bad. It means everything should still _run_ on Java 11, even if this method can't be invoked.

But is there any reason to think this kind of low-level intervention in the ByteBuffer wouldn't be needed on Java 11 anyway? I doubt it, but I wonder.

> sun.misc.Unsafe in JDK11
> ------------------------
>
>                 Key: SPARK-24421
>                 URL: https://issues.apache.org/jira/browse/SPARK-24421
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Build
>    Affects Versions: 2.3.0
>            Reporter: DB Tsai
>            Priority: Major
>
> Many internal APIs such as Unsafe are encapsulated in JDK 9+; see http://openjdk.java.net/jeps/260 for details.
> To use Unsafe, we need to add *jdk.unsupported* to our code’s module declaration:
> {code:java}
> module java9unsafe {
>     requires jdk.unsupported;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org