Posted to reviews@spark.apache.org by "xinrong-meng (via GitHub)" <gi...@apache.org> on 2024/01/18 00:45:42 UTC

[PR] [WIP] Basic support of SparkSession-based memory profiler [spark]

xinrong-meng opened a new pull request, #44775:
URL: https://github.com/apache/spark/pull/44775

   ### What changes were proposed in this pull request?
   
   Basic support of SparkSession-based memory profiler.
   
   An example is shown below:
   
   ![image](https://github.com/apache/spark/assets/47337188/fec2c17e-6c66-40be-8b9f-9b8bc229539a)
   
   ### Why are the changes needed?
   
   To support memory profiling in Spark Connect.
   
   ### Does this PR introduce _any_ user-facing change?
   
   Yes, the SparkSession-based memory profiler is available.
   
   ### How was this patch tested?
   
   TODO
   
   ### Was this patch authored or co-authored using generative AI tooling?
   
   No.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "ueshin (via GitHub)" <gi...@apache.org>.
ueshin commented on code in PR #44775:
URL: https://github.com/apache/spark/pull/44775#discussion_r1468255186


##########
python/pyspark/profiler.py:
##########
@@ -196,16 +197,41 @@ def add(
             for subcode in filter(inspect.iscode, code.co_consts):
                 self.add(subcode, toplevel_code=toplevel_code)
 
+    class CodeMapForUDFV2(CodeMap):
+        def add(
+            self,
+            code: Any,
+            toplevel_code: Optional[Any] = None,
+        ) -> None:
+            if code in self:
+                return
+
+            if toplevel_code is None:
+                toplevel_code = code
+                filename = code.co_filename
+                self._toplevel.append((filename, code))
+                self[code] = {}
+            else:
+                self[code] = self[toplevel_code]
+            for subcode in filter(inspect.iscode, code.co_consts):
+                self.add(subcode, toplevel_code=toplevel_code)
+
+        def items(self) -> Iterator[Tuple[str, Iterator[Tuple[int, Any]]]]:
+            """Iterate on the toplevel code blocks."""
+            for filename, code in self._toplevel:
+                measures = self[code]
+                if not measures:
+                    continue  # skip if no measurement
+                linenos = range(min(measures), max(measures) + 1)

Review Comment:
   We may want to delay generating the full `linenos` until showing the results, to reduce the intermediate data?
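   The suggestion above can be sketched in plain Python: rather than materializing the `range` inside `items()`, build it only when the profile is rendered. `lazy_linenos` is a hypothetical helper name for illustration, not the actual PR code.
   
   ```python
   def lazy_linenos(measures):
       """Build the line-number range only at display time.

       `measures` maps line numbers to recorded memory stats, as in the
       CodeMap dictionaries quoted above.
       """
       if not measures:  # no measurements recorded for this code block
           return range(0)
       return range(min(measures), max(measures) + 1)

   # Nothing is materialized until the results are shown.
   measures = {10: (1.8, 149.6), 13: (0.1, 89.9)}
   print(list(lazy_linenos(measures)))  # → [10, 11, 12, 13]
   ```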





Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on code in PR #44775:
URL: https://github.com/apache/spark/pull/44775#discussion_r1468269177


##########
python/pyspark/profiler.py:
##########
@@ -196,16 +197,41 @@ def add(
             for subcode in filter(inspect.iscode, code.co_consts):
                 self.add(subcode, toplevel_code=toplevel_code)
 
+    class CodeMapForUDFV2(CodeMap):
+        def add(
+            self,
+            code: Any,
+            toplevel_code: Optional[Any] = None,
+        ) -> None:
+            if code in self:
+                return
+
+            if toplevel_code is None:
+                toplevel_code = code
+                filename = code.co_filename
+                self._toplevel.append((filename, code))
+                self[code] = {}
+            else:
+                self[code] = self[toplevel_code]
+            for subcode in filter(inspect.iscode, code.co_consts):
+                self.add(subcode, toplevel_code=toplevel_code)
+
+        def items(self) -> Iterator[Tuple[str, Iterator[Tuple[int, Any]]]]:
+            """Iterate on the toplevel code blocks."""
+            for filename, code in self._toplevel:
+                measures = self[code]
+                if not measures:
+                    continue  # skip if no measurement
+                linenos = range(min(measures), max(measures) + 1)

Review Comment:
   Good idea! Updated.
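   For readers skimming the quoted diff, the recursion in `add` (descending into nested code objects through `co_consts`) can be illustrated standalone; `collect_names` is a hypothetical helper for this sketch, not part of the PR.
   
   ```python
   import inspect

   def collect_names(code, acc=None):
       """Walk a code object and every nested code object found in its
       co_consts, mirroring the recursion in CodeMapForUDFV2.add, and
       collect the code object names."""
       if acc is None:
           acc = []
       acc.append(code.co_name)
       for sub in filter(inspect.iscode, code.co_consts):
           collect_names(sub, acc)
       return acc

   src = "def outer():\n    def inner():\n        return 1\n"
   print(collect_names(compile(src, "<demo>", "exec")))  # → ['<module>', 'outer', 'inner']
   ```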





Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "ueshin (via GitHub)" <gi...@apache.org>.
ueshin commented on PR #44775:
URL: https://github.com/apache/spark/pull/44775#issuecomment-1915570045

   Thanks! merging to master.




Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on code in PR #44775:
URL: https://github.com/apache/spark/pull/44775#discussion_r1468269380


##########
python/pyspark/profiler.py:
##########
@@ -196,16 +197,41 @@ def add(
             for subcode in filter(inspect.iscode, code.co_consts):
                 self.add(subcode, toplevel_code=toplevel_code)
 
+    class CodeMapForUDFV2(CodeMap):
+        def add(
+            self,
+            code: Any,
+            toplevel_code: Optional[Any] = None,
+        ) -> None:
+            if code in self:
+                return
+
+            if toplevel_code is None:
+                toplevel_code = code
+                filename = code.co_filename
+                self._toplevel.append((filename, code))
+                self[code] = {}
+            else:
+                self[code] = self[toplevel_code]
+            for subcode in filter(inspect.iscode, code.co_consts):
+                self.add(subcode, toplevel_code=toplevel_code)
+
+        def items(self) -> Iterator[Tuple[str, Iterator[Tuple[int, Any]]]]:
+            """Iterate on the toplevel code blocks."""
+            for filename, code in self._toplevel:
+                measures = self[code]
+                if not measures:
+                    continue  # skip if no measurement
+                linenos = range(min(measures), max(measures) + 1)

Review Comment:
   ```
   ============================================================
   Profile of UDF<id=2>
   ============================================================
   Filename: /var/folders/h_/60n1p_5s7751jx1st4_sk0780000gp/T/ipykernel_69451/109011680.py
   
   Line #    Mem usage    Increment  Occurrences   Line Contents
   =============================================================
        8    147.7 MiB    147.7 MiB          20   @udf("string")
        9                                         def a(x):
       10    149.6 MiB      1.8 MiB          20     if TaskContext.get().partitionId() % 2 == 0:
       11     59.9 MiB      0.1 MiB           8       return str(x)
       12                                           else:
       13     89.9 MiB      0.1 MiB          12       return None
   ```
   tested on Jupyter.





Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on PR #44775:
URL: https://github.com/apache/spark/pull/44775#issuecomment-1915611086

   Thank you @ueshin !




Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "ueshin (via GitHub)" <gi...@apache.org>.
ueshin closed pull request #44775: [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler
URL: https://github.com/apache/spark/pull/44775




Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on PR #44775:
URL: https://github.com/apache/spark/pull/44775#issuecomment-1910809804

   The https://github.com/xinrong-meng/spark/actions/runs/7648782322/job/20842144027 failure is unrelated to the PR changes. I will rebase onto master.
   @ueshin would you please review when you are free?




Re: [PR] [SPARK-46687][PYTHON][CONNECT] Basic support of SparkSession-based memory profiler [spark]

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on PR #44775:
URL: https://github.com/apache/spark/pull/44775#issuecomment-1904859525

   The failing test https://github.com/xinrong-meng/spark/actions/runs/7616103340/job/20742137944 is unrelated to this PR.

