Posted to issues@openwhisk.apache.org by GitBox <gi...@apache.org> on 2020/05/08 08:25:18 UTC

[GitHub] [openwhisk] ningyougang opened a new pull request #4871: Adjust prewarm container dynamically

ningyougang opened a new pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871


   <!--- Provide a concise summary of your changes in the Title -->
   
   ## Description
   <!--- Provide a detailed description of your changes. -->
   <!--- Include details of what problem you are solving and how your changes are tested. -->
   As we already know, when the invoker starts up, it creates some prewarmed containers in the prewarmed pool in advance.
   
   If the prewarmed containers are not used for some time, they will still exist forever,
   which may lead to a `waste of resources`.
   
   So we need some mechanism to save memory resources, e.g.
   * If the prewarmed containers are not used for some time, delete them automatically.
   * When a cold start happens, supplement a prewarmed container matching that kind/memory at runtime (see the sketch after this list).
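   
   A minimal sketch of the sizing idea behind the second point (names here are illustrative; the actual rule lives in `ContainerPool`/`ReactivePrewarmingConfig` and may differ in detail):
   
   ```scala
   import scala.concurrent.duration._
   
   // Hypothetical, simplified config mirroring the ReactivePrewarmingConfig fields introduced in this PR.
   case class ReactiveConfig(minCount: Int, maxCount: Int, ttl: FiniteDuration, threshold: Int, increment: Int)
   
   object PrewarmSizing {
     // How many prewarmed containers of one kind/memory to aim for, given recent cold starts.
     def desiredCount(config: ReactiveConfig, coldStartsLastMinute: Int): Int = {
       val requested = (coldStartsLastMinute / config.threshold) * config.increment
       math.min(config.maxCount, math.max(config.minCount, requested))
     }
   }
   
   // e.g. ReactiveConfig(minCount = 0, maxCount = 4, ttl = 2.minutes, threshold = 1, increment = 1)
   // with 2 cold starts in the last minute -> 2 desired prewarmed containers
   ```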
   
   Note: currently I have not added test cases; after a round of review, I will add them.
   
   ## Related issue and scope
   <!--- Please include a link to a related issue if there is one. -->
   https://github.com/apache/openwhisk/issues/4725#issuecomment-572824267
   
   ## My changes affect the following components
   <!--- Select below all system components are affected by your change. -->
   <!--- Enter an `x` in all applicable boxes. -->
   - [ ] API
   - [ ] Controller
   - [ ] Message Bus (e.g., Kafka)
   - [ ] Loadbalancer
   - [x] Invoker
   - [ ] Intrinsic actions (e.g., sequences, conductors)
   - [ ] Data stores (e.g., CouchDB)
   - [x] Tests
   - [ ] Deployment
   - [ ] CLI
   - [ ] General tooling
   - [ ] Documentation
   
   ## Types of changes
   <!--- What types of changes does your code introduce? Use `x` in all the boxes that apply: -->
   - [ ] Bug fix (generally a non-breaking change which closes an issue).
   - [x] Enhancement or new feature (adds new functionality).
   - [ ] Breaking change (a bug fix or enhancement which changes existing behavior).
   
   ## Checklist:
   <!--- Please review the points below which help you make sure you've covered all aspects of the change you're making. -->
   
   - [x] I signed an [Apache CLA](https://github.com/apache/openwhisk/blob/master/CONTRIBUTING.md).
   - [x] I reviewed the [style guides](https://github.com/apache/openwhisk/wiki/Contributing:-Git-guidelines#code-readiness) and followed the recommendations (Travis CI will check :).
   - [x] I added tests to cover my changes.
   - [ ] My changes require further changes to the documentation.
   - [ ] I updated the documentation where necessary.
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414346893



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +86,59 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is `kind,memory,time`, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[String, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, delete prewarmed container if unused for sometime
+  context.system.scheduler.schedule(10.seconds, 1.minute, self, DeleteUnusedPrewarmedContainer)
+
+  // check periodically for the cold start and create some increment containers automatically if activation >= threshold
+  context.system.scheduler.schedule(1.minute, 1.minute, self, SupplementPrewarmedContainer)
+
+  def deleteUnusedPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      for ((container, data) <- containers) {
+        if (JDuration.between(data.lastUsed, Instant.now).compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          // Don't recover a new one under this situation
+          container ! RemovePreWarmedContainer

Review comment:
       Yes, that can work; updated accordingly.







[GitHub] [openwhisk] codecov-io edited a comment on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
codecov-io edited a comment on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-621566016


   # [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=h1) Report
   > Merging [#4871](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/apache/openwhisk/commit/d495993a54d81ccbd5c28f0aa971f254722b1f9d&el=desc) will **decrease** coverage by `6.02%`.
   > The diff coverage is `92.50%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/openwhisk/pull/4871/graphs/tree.svg?width=650&height=150&src=pr&token=l0YmsiSAso)](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##           master    #4871      +/-   ##
   ==========================================
   - Coverage   83.32%   77.29%   -6.03%     
   ==========================================
     Files         200      200              
     Lines        9283     9365      +82     
     Branches      383      384       +1     
   ==========================================
   - Hits         7735     7239     -496     
   - Misses       1548     2126     +578     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree) | Coverage Δ | |
   |---|---|---|
   | [...rg/apache/openwhisk/core/entity/ExecManifest.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZW50aXR5L0V4ZWNNYW5pZmVzdC5zY2FsYQ==) | `94.56% <86.36%> (-2.66%)` | :arrow_down: |
   | [.../openwhisk/core/containerpool/ContainerProxy.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQcm94eS5zY2FsYQ==) | `93.91% <86.95%> (+0.34%)` | :arrow_up: |
   | [...e/openwhisk/core/containerpool/ContainerPool.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQb29sLnNjYWxh) | `96.68% <95.94%> (+1.02%)` | :arrow_up: |
   | [...pache/openwhisk/core/invoker/InvokerReactive.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvaW52b2tlci9JbnZva2VyUmVhY3RpdmUuc2NhbGE=) | `79.64% <100.00%> (ø)` | |
   | [...core/database/cosmosdb/RxObservableImplicits.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvUnhPYnNlcnZhYmxlSW1wbGljaXRzLnNjYWxh) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...ore/database/cosmosdb/cache/CacheInvalidator.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3Iuc2NhbGE=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...e/database/cosmosdb/cache/ChangeFeedConsumer.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NoYW5nZUZlZWRDb25zdW1lci5zY2FsYQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...core/database/cosmosdb/CosmosDBArtifactStore.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJBcnRpZmFjdFN0b3JlLnNjYWxh) | `0.00% <0.00%> (-96.23%)` | :arrow_down: |
   | [...sk/core/database/cosmosdb/CosmosDBViewMapper.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJWaWV3TWFwcGVyLnNjYWxh) | `0.00% <0.00%> (-93.90%)` | :arrow_down: |
   | [...tabase/cosmosdb/cache/CacheInvalidatorConfig.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3JDb25maWcuc2NhbGE=) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
   | ... and [15 more](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=footer). Last update [d495993...215fee3](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   





[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r420413809



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +87,58 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {

Review comment:
       There is a lot of duplicated, delicate logic - it needs to be consolidated, I think.







[GitHub] [openwhisk] style95 commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
style95 commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425534279



##########
File path: common/scala/src/main/scala/org/apache/openwhisk/core/entity/ExecManifest.scala
##########
@@ -344,9 +373,38 @@ protected[core] object ExecManifest {
 
   protected[entity] implicit val imageNameSerdes: RootJsonFormat[ImageName] = jsonFormat4(ImageName.apply)
 
-  protected[entity] implicit val stemCellSerdes: RootJsonFormat[StemCell] = {
+  protected[entity] implicit val ttlSerdes: RootJsonFormat[FiniteDuration] = new RootJsonFormat[FiniteDuration] {
+    override def write(finiteDuration: FiniteDuration): JsValue = JsString(finiteDuration.toString)
+
+    override def read(value: JsValue): FiniteDuration = value match {
+      case JsString(s) =>
+        val duration = Duration(s)
+        FiniteDuration(duration.length, duration.unit)
+      case _ =>
+        deserializationError("time unit not supported. Only milliseconds, seconds, minutes, hours, days are supported")
+    }
+  }
+
+  protected[entity] implicit val reactivePrewarmingConfigSerdes: RootJsonFormat[ReactivePrewarmingConfig] = jsonFormat5(
+    ReactivePrewarmingConfig.apply)
+
+  protected[entity] implicit val stemCellSerdes = new RootJsonFormat[StemCell] {
     import org.apache.openwhisk.core.entity.size.serdes
-    jsonFormat2(StemCell.apply)
+    val defaultSerdes = jsonFormat3(StemCell.apply)
+    override def read(value: JsValue): StemCell = {
+      val fields = value.asJsObject.fields
+      val initialCount =
+        fields
+          .get("initialCount")
+          .orElse(fields.get("count"))
+          .map(_.convertTo[Int])
+          .get
+      val memory = fields.get("memory").map(_.convertTo[ByteSize]).get

Review comment:
       I would avoid using `get` on `Option`.
   
   Since these configurations are required, I think we can do something similar to this.
   ```suggestion
       override def read(value: JsValue): StemCell = {
         val fields = value.asJsObject.fields
         val initialCount: Option[Int] =
           fields
             .get("initialCount")
             .orElse(fields.get("count"))
             .map(_.convertTo[Int])
   
         val memory: Option[ByteSize] = fields.get("memory").map(_.convertTo[ByteSize])
         val config = fields.get("reactive").map(_.convertTo[ReactivePrewarmingConfig])
         
         (initialCount, memory) match {
           case (Some(c), Some(m)) => StemCell(c, m, config)
           case (Some(c), None)  => throw new IllegalArgumentException(s"blah blah: $c")
           case (None, Some(m))  => throw new IllegalArgumentException(s"blah blah: $m")
           case _ => throw new IllegalArgumentException(s"blah blah")
         }
   ```

##########
File path: ansible/roles/invoker/tasks/deploy.yml
##########
@@ -278,6 +278,7 @@
       "CONFIG_whisk_invoker_https_keystorePassword": "{{ invoker.ssl.keystore.password }}"
       "CONFIG_whisk_invoker_https_keystoreFlavor": "{{ invoker.ssl.storeFlavor }}"
       "CONFIG_whisk_invoker_https_clientAuth": "{{ invoker.ssl.clientAuth }}"
+      "CONFIG_whisk_containerPool_prewarmExpiredCheckPeriod": "{{ container_pool_prewarmExpiredCheckPeriod | default('1 minute') }}"

Review comment:
       It's a bit picky but how about changing the name to a more intuitive one such as `container_pool_prewarm_expirationCheckInterval`?
   







[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425014361



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,73 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        //done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        //started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+      //determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            //scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            //normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      //remove expired
+      if (scheduled) {
+        config.reactive.foreach { reactiveValue =>
+          prewarmedPool
+            .filter { warmInfo =>
+              warmInfo match {
+                case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+                case _                                                                  => false
+              }
+            }
+            .drop(reactiveValue.minCount) //keep minCount even if expired
+            .map(_._1 ! Remove)
+        }
+      }
+
+      if (currentCount < desiredCount) {
         logging.info(
           this,
-          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${config.count - containerCount} pre-warms to desired count: ${config.count} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
+          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${desiredCount - currentCount} pre-warms to desired count: ${desiredCount} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
           TransactionId.invokerWarmup)
-        (containerCount until config.count).foreach { _ =>
-          prewarmContainer(config.exec, config.memoryLimit)
+        (currentCount until desiredCount).foreach { _ =>
+          prewarmContainer(config.exec, config.memoryLimit, config.reactive.map(_.ttl))
         }
       }
     }
+    if (scheduled) {
+      //clear coldStartCounts each time scheduled event is processed to reset counts
+      coldStartCount = immutable.Map.empty[ColdStartKey, Int]

Review comment:
       Updated accordingly

##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,73 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        //done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        //started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+      //determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            //scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            //normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      //remove expired
+      if (scheduled) {
+        config.reactive.foreach { reactiveValue =>
+          prewarmedPool
+            .filter { warmInfo =>
+              warmInfo match {
+                case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+                case _                                                                  => false
+              }
+            }
+            .drop(reactiveValue.minCount) //keep minCount even if expired
+            .map(_._1 ! Remove)

Review comment:
       Updated accordingly as well.







[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413648969



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -340,6 +418,47 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     newContainer ! Start(exec, memoryLimit)
   }
 
+  /** Create a new prewarm container if currentCount doesn't reach maxCount */
+  def prewarmContainerIfpossible(kind: String, memoryLimit: ByteSize, count: Int): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { config =>
+        val currentCount = prewarmedPool.count {
+          case (_, PreWarmedData(_, `kind`, `memoryLimit`, _)) => true //done starting
+          case _                                               => false //started but not finished starting
+        }
+        val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memoryLimit)
+        val containerCount = currentCount + startingCount
+        if (containerCount < config.maxCount) {
+          val createNumber = if (config.maxCount - containerCount > count) count else config.maxCount - containerCount
+          1 to createNumber foreach { _ =>
+            prewarmContainer(config.exec, config.memoryLimit)
+          }
+        }
+      }
+  }
+
+  /** statistics the cold start */
+  def countColdStart(kind: String, memoryLimit: ByteSize): Unit = {

Review comment:
       Already changed to `incrementColdStartCount`.

##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -340,6 +418,47 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     newContainer ! Start(exec, memoryLimit)
   }
 
+  /** Create a new prewarm container if currentCount doesn't reach maxCount */
+  def prewarmContainerIfpossible(kind: String, memoryLimit: ByteSize, count: Int): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { config =>
+        val currentCount = prewarmedPool.count {
+          case (_, PreWarmedData(_, `kind`, `memoryLimit`, _)) => true //done starting
+          case _                                               => false //started but not finished starting
+        }
+        val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memoryLimit)
+        val containerCount = currentCount + startingCount
+        if (containerCount < config.maxCount) {
+          val createNumber = if (config.maxCount - containerCount > count) count else config.maxCount - containerCount
+          1 to createNumber foreach { _ =>
+            prewarmContainer(config.exec, config.memoryLimit)
+          }
+        }
+      }
+  }
+
+  /** statistics the cold start */
+  def countColdStart(kind: String, memoryLimit: ByteSize): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { _ =>
+        val time = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(Calendar.getInstance().getTime)
+        val key = s"${kind},${memoryLimit.toMB},${time}"

Review comment:
       Changed







[GitHub] [openwhisk] rabbah commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
rabbah commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r415797752



##########
File path: ansible/files/runtimes.json
##########
@@ -57,8 +57,15 @@
                 },
                 "stemCells": [
                     {
-                        "count": 2,
-                        "memory": "256 MB"
+                        "initialCount": 2,

Review comment:
       Renaming the field would be a breaking change - better to support both to allow backward compatibility; `get(initialCount).orElse(count)`, for example.
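   
    A minimal, self-contained sketch of that fallback, assuming spray-json (which the serdes in this PR already uses); `fields` here is the parsed JSON object's field map:
   
    ```scala
    import spray.json._
    import spray.json.DefaultJsonProtocol._
   
    object StemCellCountCompat extends App {
      def readInitialCount(json: JsValue): Option[Int] = {
        val fields = json.asJsObject.fields
        fields
          .get("initialCount")             // new field name...
          .orElse(fields.get("count"))     // ...falling back to the legacy one
          .map(_.convertTo[Int])
      }
   
      println(readInitialCount("""{"count": 2, "memory": "256 MB"}""".parseJson))        // Some(2)
      println(readInitialCount("""{"initialCount": 2, "memory": "256 MB"}""".parseJson)) // Some(2)
    }
    ```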







[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414034495



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +87,58 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {

Review comment:
       Can you DRY up the code in `adjustPrewarmedContainer` and `backfillPrewarms`, possibly using the same function for both?







[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r427007633



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 code start happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containrs, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer

Review comment:
       Updated accordingly.







[GitHub] [openwhisk] codecov-commenter commented on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
codecov-commenter commented on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-630579281


   # [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=h1) Report
   > Merging [#4871](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/apache/openwhisk/commit/a4122fdd58a2c0ce6eab09fdc3a8678a75af83b2&el=desc) will **decrease** coverage by `5.94%`.
   > The diff coverage is `93.12%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/openwhisk/pull/4871/graphs/tree.svg?width=650&height=150&src=pr&token=l0YmsiSAso)](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##           master    #4871      +/-   ##
   ==========================================
   - Coverage   83.19%   77.25%   -5.95%     
   ==========================================
     Files         201      201              
     Lines        9351     9441      +90     
     Branches      397      396       -1     
   ==========================================
   - Hits         7780     7294     -486     
   - Misses       1571     2147     +576     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree) | Coverage Δ | |
   |---|---|---|
   | [...rg/apache/openwhisk/core/entity/ExecManifest.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZW50aXR5L0V4ZWNNYW5pZmVzdC5zY2FsYQ==) | `91.83% <78.57%> (-5.39%)` | :arrow_down: |
   | [.../openwhisk/core/containerpool/ContainerProxy.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQcm94eS5zY2FsYQ==) | `93.85% <85.00%> (+0.30%)` | :arrow_up: |
   | [...in/scala/org/apache/openwhisk/common/Logging.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvbW1vbi9Mb2dnaW5nLnNjYWxh) | `76.41% <100.00%> (-8.41%)` | :arrow_down: |
   | [...penwhisk/core/containerpool/ContainerFactory.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJGYWN0b3J5LnNjYWxh) | `87.50% <100.00%> (+0.83%)` | :arrow_up: |
   | [...e/openwhisk/core/containerpool/ContainerPool.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQb29sLnNjYWxh) | `97.45% <100.00%> (+1.80%)` | :arrow_up: |
   | [...pache/openwhisk/core/invoker/InvokerReactive.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvaW52b2tlci9JbnZva2VyUmVhY3RpdmUuc2NhbGE=) | `79.64% <100.00%> (ø)` | |
   | [...core/database/cosmosdb/RxObservableImplicits.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvUnhPYnNlcnZhYmxlSW1wbGljaXRzLnNjYWxh) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...ore/database/cosmosdb/cache/CacheInvalidator.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3Iuc2NhbGE=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...e/database/cosmosdb/cache/ChangeFeedConsumer.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NoYW5nZUZlZWRDb25zdW1lci5zY2FsYQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...core/database/cosmosdb/CosmosDBArtifactStore.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJBcnRpZmFjdFN0b3JlLnNjYWxh) | `0.00% <0.00%> (-96.23%)` | :arrow_down: |
   | ... and [19 more](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=footer). Last update [a4122fd...5ae5fa4](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   





[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425494727



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +757,102 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "supplement prewarmed container when doesn't have enough container to handle activation" in {
+    val (containers, factory) = testContainers(6)
+    val feed = TestProbe()
+
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer

Review comment:
       Passing the map parameters is not a problem - see `ContainerPool.schedule`.
   The function result can be changed from `Unit` to `(Seq[ActorRef], Seq[(String, ByteSize, Int)])`, representing (prewarms to delete, prewarms to create). My concern is that the sleep and message handling logic is very complicated and may miss some edge cases in this critical function, so isolating it in a more testable way would give more confidence.
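   
    A rough sketch of that shape (illustrative names, not the PR's actual code), assuming Akka is on the classpath as it is in the invoker:
   
    ```scala
    import akka.actor.ActorRef
   
    // Stand-in for (String, ByteSize, Int): kind, memory in MB, number of containers to create.
    final case class PrewarmCreate(kind: String, memoryMB: Int, count: Int)
    final case class PrewarmAdjustment(toDelete: Seq[ActorRef], toCreate: Seq[PrewarmCreate])
   
    object PrewarmPlanner {
      // Pure planning step: returns what should happen instead of sending actor messages,
      // so it can be unit-tested without TestProbes or Thread.sleep.
      def plan(expired: Seq[ActorRef], missing: Map[(String, Int), Int]): PrewarmAdjustment =
        PrewarmAdjustment(
          toDelete = expired,
          toCreate = missing.map { case ((kind, mem), n) => PrewarmCreate(kind, mem, n) }.toSeq)
    }
    ```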







[GitHub] [openwhisk] tysonnorris commented on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-619239158


    BTW I think `takePrewarmContainer` needs enhancement as well, so that the "oldest" prewarm is always used - otherwise it could end up in a state where a random but recent prewarm is taken, and then all the others suddenly expire and are removed, leaving only the minimum. But if we always take the oldest, there will be more left over when expiring ones are removed.
   e.g. (assuming PrewarmedData.expires is a Deadline as commented elsewhere) 
   ```
       val now = Deadline.now
       prewarmedPool
         .toSeq
         .sortBy(_._2.expires.getOrElse(now))
         .find {
         ...
   ```
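   
    A self-contained sketch of that ordering, with `SimplePrewarm` standing in for `PreWarmedData` (illustrative names only):
   
    ```scala
    import scala.concurrent.duration.Deadline
   
    case class SimplePrewarm(id: String, kind: String, memoryMB: Int, expires: Option[Deadline])
   
    object OldestFirst {
      // Sort by expiry deadline so the prewarm closest to expiring (the oldest) is taken first;
      // entries without a ttl are treated as due now, as in the snippet above.
      def takeOldest(pool: Seq[SimplePrewarm], kind: String, memoryMB: Int): Option[SimplePrewarm] = {
        val now = Deadline.now
        pool
          .filter(p => p.kind == kind && p.memoryMB == memoryMB)
          .sortBy(_.expires.getOrElse(now))
          .headOption
      }
    }
    ```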
   





[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r420423754



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -279,6 +282,10 @@ class ContainerProxy(factory: (TransactionId,
   when(Uninitialized) {
     // pre warm a container (creates a stem cell container)
     case Event(job: Start, _) =>
+      val deadline = job.ttl match {
+        case Some(value) => Some(FiniteDuration(value.toMillis, TimeUnit.MILLISECONDS).fromNow)

Review comment:
       With the above change from Duration to FiniteDuration, you can now write just `case Some(value) => Some(value.fromNow)`, or better, remove the match entirely and below use
   `PreWarmCompleted(PreWarmedData(container, job.exec.kind, job.memoryLimit, expires = job.ttl.map(_.fromNow))))`
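   
    A tiny sketch of what that simplification relies on (standard Scala duration API):
   
    ```scala
    import scala.concurrent.duration._
   
    object TtlToDeadline extends App {
      // Once the ttl is a FiniteDuration, `fromNow` yields the Deadline directly;
      // no manual conversion through milliseconds is needed.
      val ttl: Option[FiniteDuration] = Some(2.minutes)
      val expires: Option[Deadline] = ttl.map(_.fromNow)
   
      println(expires.exists(_.isOverdue())) // false until the ttl elapses
    }
    ```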







[GitHub] [openwhisk] ningyougang commented on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-619744141


   Regarding `BTW I think the takePrewarmContainer needs enhancement as well, so that the "oldest" prewarm is always used - otherwise`
   
   OK, you are very careful.





[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r417018419



##########
File path: ansible/files/runtimes.json
##########
@@ -57,8 +57,15 @@
                 },
                 "stemCells": [
                     {
-                        "count": 2,
-                        "memory": "256 MB"
+                        "initialCount": 2,

Review comment:
       @rabbah, I think we can change `count` to `initialCount` directly.
   
   As for the problem you raised (`renaming the field would be a breaking change`), I think it is not an issue, because all of this PR's changes are on the invoker side only; after applying this PR to a new invoker, there is no bad influence on old invokers or other components (e.g. the controller).
   
   BTW, regarding `xxx.get(initialCount).orElse(count)`:
   what is `xxx` here, and where is `.get` written?







[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r424706640



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,73 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        //done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        //started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+      //determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            //scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            //normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      //remove expired
+      if (scheduled) {
+        config.reactive.foreach { reactiveValue =>
+          prewarmedPool
+            .filter { warmInfo =>
+              warmInfo match {
+                case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+                case _                                                                  => false
+              }
+            }
+            .drop(reactiveValue.minCount) //keep minCount even if expired
+            .map(_._1 ! Remove)
+        }
+      }
+
+      if (currentCount < desiredCount) {
         logging.info(
           this,
-          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${config.count - containerCount} pre-warms to desired count: ${config.count} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
+          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${desiredCount - currentCount} pre-warms to desired count: ${desiredCount} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
           TransactionId.invokerWarmup)
-        (containerCount until config.count).foreach { _ =>
-          prewarmContainer(config.exec, config.memoryLimit)
+        (currentCount until desiredCount).foreach { _ =>
+          prewarmContainer(config.exec, config.memoryLimit, config.reactive.map(_.ttl))
         }
       }
     }
+    if (scheduled) {
+      //clear coldStartCounts each time scheduled event is processed to reset counts
+      coldStartCount = immutable.Map.empty[ColdStartKey, Int]

Review comment:
       Before clearing this map, we should log a metric counter (with tags for kind+memory) so that prewarm "misses" can be tracked over time.
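   
    A minimal sketch of that idea (the `ColdStartKey` shape and the metrics sink are assumptions; the real code would go through OpenWhisk's metrics/logging helpers):
   
    ```scala
    // Hypothetical key shape: one counter per kind/memory combination.
    final case class ColdStartKey(kind: String, memoryMB: Long)
   
    object ColdStartReporter {
      // Report each per-kind/memory count before resetting, so prewarm "misses" can be tracked over time.
      def reportAndReset(coldStartCount: Map[ColdStartKey, Int]): Map[ColdStartKey, Int] = {
        coldStartCount.foreach {
          case (ColdStartKey(kind, memoryMB), count) =>
            // stand-in for emitting a counter metric tagged with kind and memory
            println(s"coldStart kind=$kind memoryMB=$memoryMB count=$count")
        }
        Map.empty
      }
    }
    ```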







[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425563654



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/entity/test/ExecManifestTests.scala
##########
@@ -336,25 +339,144 @@ class ExecManifestTests extends FlatSpec with WskActorSystem with StreamLogging
     mf shouldBe {
       Runtimes(
         Set(
-          RuntimeFamily("nodef", Set(js6, js8)),
+          RuntimeFamily("nodef", Set(js10, js8)),
           RuntimeFamily("pythonf", Set(py)),
           RuntimeFamily("swiftf", Set(sw)),
           RuntimeFamily("phpf", Set(ph))),
         Set.empty,
         None)
     }
 
-    def stemCellFactory(m: RuntimeManifest, cells: List[StemCell]) = cells.map { c =>
-      (m.kind, m.image, c.count, c.memory)
+    mf.stemcells.flatMap {
+      case (m, cells) =>
+        cells.map { c =>
+          (m.kind, m.image, c.initialCount, c.memory)
+        }
+    }.toList should contain theSameElementsAs List(
+      (js10.kind, js10.image, 1, 128.MB),
+      (js8.kind, js8.image, 1, 128.MB),
+      (js8.kind, js8.image, 1, 256.MB),
+      (py.kind, py.image, 2, 256.MB))
+  }
+
+  it should "parse manifest with reactive from JSON string" in {

Review comment:
       Add another test case for `parse manifest with reactive from JSON string`
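
   e.g. the new test could parse something like this (a sketch only; the field names of the `reactive` block are my assumption based on `ReactivePrewarmingConfig`, and it needs `spray.json._` plus the stemcell serdes in scope):
   ```
   val json = """{
     "initialCount": 2,
     "memory": "256 MB",
     "reactive": { "minCount": 1, "maxCount": 4, "ttl": "2 minutes", "threshold": 1, "increment": 1 }
   }""".parseJson
   json.convertTo[StemCell]
   ```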




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414039190



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +87,58 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      var currentCount = containers.size
+      for ((container, data) <- containers) {
+        if (currentCount > minCount && JDuration
+              .between(data.lastUsed, Instant.now)
+              .compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          container ! RemovePreWarmedContainer
+          currentCount -= 1
+        }
+      }
+
+      // supplement some prewarmed container if cold start happened
+      val calendar = Calendar.getInstance()
+      calendar.add(Calendar.MINUTE, -1)
+      val lastDate = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(calendar.getTime)
+      val coldStartKey = ColdStartKey(kind, memory.toMB, lastDate)
+      coldStartCount.get(coldStartKey) match {
+        case Some(value) =>
+          if (value >= config.threshold) {
+            val createdCount = value / config.increment
+            val count = if (createdCount > 0) createdCount else 1
+            prewarmContainerIfpossible(kind, memory, count)
+          } else {
+            // at lease create 1 prewarmed container
+            prewarmContainerIfpossible(kind, memory, 1)
+          }
+        case None =>
+      }
+      coldStartCount = coldStartCount - coldStartKey

Review comment:
       Should this just reset the map to empty? Otherwise this map may grow without bound?
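
   i.e. something like (sketch):
   ```
   // reset the whole map once per scheduled run instead of dropping individual keys
   coldStartCount = immutable.Map.empty[ColdStartKey, Int]
   ```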




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413658626



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -375,6 +377,8 @@ class ContainerProxy(factory: (TransactionId,
 
     case Event(Remove, data: PreWarmedData) => destroyContainer(data)
 
+    case Event(RemovePreWarmedContainer, data: PreWarmedData) => destoryPreWarmedContainer(data)

Review comment:
       Updated accordingly

##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -307,18 +384,19 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     prewarmConfig.foreach { config =>
       val kind = config.exec.kind
       val memory = config.memoryLimit
+      val initialCount = config.initialCount
       val currentCount = prewarmedPool.count {
         case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
         case _                                          => false //started but not finished starting
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
       val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      if (containerCount < initialCount) {

Review comment:
       Updated accordingly




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425488753



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)

Review comment:
       should sleep an amount based on ttl (e.g. ttl + 1.millis)
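
   e.g. (sketch, assuming `ttl` is the FiniteDuration used for the prewarm config above):
   ```
   // wait just past the ttl instead of a hard-coded 3 seconds
   Thread.sleep((ttl + 1.milliseconds).toMillis)
   ```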




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414348120



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -340,6 +418,45 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     newContainer ! Start(exec, memoryLimit)
   }
 
+  /** Create a new prewarm container if currentCount doesn't reach maxCount */
+  def prewarmContainerIfpossible(kind: String, memoryLimit: ByteSize, count: Int): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { config =>
+        val currentCount = prewarmedPool.count {
+          case (_, PreWarmedData(_, `kind`, `memoryLimit`, _)) => true //done starting
+          case _                                               => false //started but not finished starting
+        }
+        val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memoryLimit)
+        val containerCount = currentCount + startingCount
+        if (containerCount < config.maxCount) {
+          val createNumber = if (config.maxCount - containerCount > count) count else config.maxCount - containerCount
+          logging.info(this, s"add ${createNumber} [kind: ${kind} memory: ${memoryLimit}] prewarmed containers")
+          1 to createNumber foreach { _ =>
+            prewarmContainer(config.exec, config.memoryLimit)
+          }
+        }
+      }
+  }
+
+  /** statistics the cold start */
+  def incrementColdStartCount(kind: String, memoryLimit: ByteSize): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { _ =>
+        val time = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(Calendar.getInstance().getTime)

Review comment:
       Yes, you are right. Because our schedule interval is every minute, the time component can be removed from the key.
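
   i.e. the key can drop the formatted time entirely (sketch):
   ```
   // with a 1-minute schedule, kind + memory is enough to bucket cold starts
   case class ColdStartKey(kind: String, memory: ByteSize)
   ```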




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425547819



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +757,102 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "supplement prewarmed container when doesn't have enough container to handle activation" in {
+    val (containers, factory) = testContainers(6)
+    val feed = TestProbe()
+
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer

Review comment:
       Hm...
   Can you check this in your local env?
   I found that it is not only the problem of `passing the map parameter`; there are other problems (they are not easy to describe clearly), e.g. the calls between variables and methods become too complicated.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r426941964



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +757,102 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "supplement prewarmed container when doesn't have enough container to handle activation" in {
+    val (containers, factory) = testContainers(6)
+    val feed = TestProbe()
+
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer

Review comment:
       Here is some code - not all of it has to be moved to the object, but it covers the main logic to determine a) how many prewarms to create and b) which prewarms to remove:
   ```
     def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
       //fill in missing prewarms
       ContainerPool
         .increasePrewarms(init, scheduled, coldStartCount, prewarmConfig, prewarmedPool, prewarmStartingPool)
         .foreach { c =>
           (1 to c._2).foreach { _ =>
             val config = c._1
             prewarmContainer(config.exec, config.memoryLimit, config.reactive.map(_.ttl))
           }
         }
       if (scheduled) {
         //on scheduled time, remove expired prewarms
         ContainerPool.removeExpired(prewarmConfig, prewarmedPool).foreach { a =>
           a ! Remove
           prewarmedPool = prewarmedPool - a
         }
         //on scheduled time, emit cold start counter metric with memory + kind
         coldStartCount foreach { coldStart =>
           val coldStartKey = coldStart._1
           MetricEmitter.emitCounterMetric(
             LoggingMarkers.CONTAINER_POOL_PREWARM_COLDSTART(coldStartKey.memory.toString, coldStartKey.kind))
         }
         //   then clear coldStartCounts each time scheduled event is processed to reset counts
         coldStartCount = immutable.Map.empty[ColdStartKey, Int]
       }
     }
   ```
   Then in object add:
   ```
     def removeExpired(prewarmConfig: List[PrewarmingConfig], prewarmedPool: Map[ActorRef, PreWarmedData])(
       implicit logging: Logging): List[ActorRef] = {
       prewarmConfig.flatMap { config =>
         val kind = config.exec.kind
         val memory = config.memoryLimit
         config.reactive
           .map { _ =>
             val expiredPrewarmedContainer = prewarmedPool
               .filter { warmInfo =>
                 warmInfo match {
                   case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
                   case _                                                                  => false
                 }
               }
             // emit expired container counter metric with memory + kind
             MetricEmitter.emitCounterMetric(LoggingMarkers.CONTAINER_POOL_PREWARM_EXPIRED(memory.toString, kind))
             logging.info(
               this,
               s"[kind: ${kind} memory: ${memory.toString}] removed ${expiredPrewarmedContainer.size} expired prewarmed container")
             expiredPrewarmedContainer.keys
           }
           .getOrElse(List.empty)
       }
     }
     def increasePrewarms(
       init: Boolean,
       scheduled: Boolean,
       coldStartCount: Map[ColdStartKey, Int],
       prewarmConfig: List[PrewarmingConfig],
       prewarmedPool: Map[ActorRef, PreWarmedData],
       prewarmStartingPool: Map[ActorRef, (String, ByteSize)])(implicit logging: Logging): Map[PrewarmingConfig, Int] = {
       prewarmConfig.map { config =>
         val kind = config.exec.kind
         val memory = config.memoryLimit
   
         val runningCount = prewarmedPool.count {
           // done starting, and not expired
           case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
           // started but not finished starting (or expired)
           case _ => false
         }
         val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
         val currentCount = runningCount + startingCount
   
         logging.info(
           this,
           s"[kind: ${kind} memory: ${memory.toString}] currentCount: ${currentCount} prewarmed container")
   
         // determine how many are needed
         val desiredCount: Int =
           if (init) config.initialCount
           else {
             if (scheduled) {
               // scheduled/reactive config backfill
               config.reactive
                 .map(c => getReactiveCold(coldStartCount, c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
                 .getOrElse(config.initialCount) //not reactive -> desired is always initial count
             } else {
               // normal backfill after removal - make sure at least minCount or initialCount is started
               config.reactive.map(_.minCount).getOrElse(config.initialCount)
             }
           }
   
         logging.info(
           this,
           s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${desiredCount - currentCount} pre-warms to desired count: ${desiredCount} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
           TransactionId.invokerWarmup)
         (config, desiredCount)
       }.toMap
     }
     def getReactiveCold(coldStartCount: Map[ColdStartKey, Int],
                         config: ReactivePrewarmingConfig,
                         kind: String,
                         memory: ByteSize): Option[Int] = {
       coldStartCount.get(ColdStartKey(kind, memory)).map { value =>
         // Let's assume that threshold is `2`, increment is `1` in runtimes.json
         // if cold start number in previous minute is `2`, requireCount is `2/2 * 1 = 1`
         // if cold start number in previous minute is `4`, requireCount is `4/2 * 1 = 2`
         math.min(math.max(config.minCount, (value / config.threshold) * config.increment), config.maxCount)
       }
     }
   ```
   WDYT?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r424707261



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,73 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        //done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        //started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+      //determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            //scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            //normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      //remove expired
+      if (scheduled) {
+        config.reactive.foreach { reactiveValue =>
+          prewarmedPool
+            .filter { warmInfo =>
+              warmInfo match {
+                case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+                case _                                                                  => false
+              }
+            }
+            .drop(reactiveValue.minCount) //keep minCount even if expired
+            .map(_._1 ! Remove)

Review comment:
       before removal we should log a metric counter (with tags for kind + memory) so that prewarm "excess" can be tracked over time.
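
   e.g. (sketch; it assumes a kind+memory tagged marker such as `CONTAINER_POOL_PREWARM_EXPIRED` is added to `LoggingMarkers`):
   ```
   // emit before sending Remove so the "excess" prewarms are tagged by memory + kind
   MetricEmitter.emitCounterMetric(LoggingMarkers.CONTAINER_POOL_PREWARM_EXPIRED(memory.toString, kind))
   ```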




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425545038



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,94 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        // done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        // started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] currentCount: ${currentCount} prewarmed container")
+
+      // determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            // scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            // normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] needs ${desiredCount} desired prewarmed container")
+
+      // remove expired
+      config.reactive.foreach { _ =>
+        val expiredPrewarmedContainer = prewarmedPool
+          .filter { warmInfo =>
+            warmInfo match {
+              case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+              case _                                                                  => false
+            }
+          }
+        // emit expired container counter metric with memory + kind
+        val removedCount = expiredPrewarmedContainer.size
+        MetricEmitter.emitHistogramMetric(LoggingMarkers.CONTAINER_POOL_EXPIRED(memory.toString, kind), removedCount)
         logging.info(
           this,
-          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${config.count - containerCount} pre-warms to desired count: ${config.count} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
+          s"[kind: ${kind} memory: ${memory.toString}] removed ${removedCount} expired prewarmed container")
+        expiredPrewarmedContainer.map(_._1 ! Remove)
+      }
+
+      if (currentCount < desiredCount) {
+        logging.info(
+          this,
+          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${desiredCount - currentCount} pre-warms to desired count: ${desiredCount} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
           TransactionId.invokerWarmup)
-        (containerCount until config.count).foreach { _ =>
-          prewarmContainer(config.exec, config.memoryLimit)
+        (currentCount until desiredCount).foreach { _ =>
+          prewarmContainer(config.exec, config.memoryLimit, config.reactive.map(_.ttl))
         }
       }
     }
+    if (scheduled) {
+      // emit cold start counter metric with memory + kind
+      coldStartCount foreach { coldStart =>
+        val coldStartKey = coldStart._1
+        val coldStartValue = coldStart._2
+        MetricEmitter.emitHistogramMetric(
+          LoggingMarkers.CONTAINER_POOL_COLDSTART(coldStartKey.memory.toString, coldStartKey.kind),
+          coldStartValue)

Review comment:
       updated accordingly




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r420409417



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +84,51 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      config.reactive match {

Review comment:
       use `config.reactive.foreach()` so you don't need to match `case None`
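
   i.e. (sketch):
   ```
   config.reactive.foreach { reactiveValue =>
     // reactive-only handling goes here; the non-reactive case simply does nothing
   }
   ```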




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425485710



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,94 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        // done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        // started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] currentCount: ${currentCount} prewarmed container")
+
+      // determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            // scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            // normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] needs ${desiredCount} desired prewarmed container")
+
+      // remove expired
+      config.reactive.foreach { _ =>
+        val expiredPrewarmedContainer = prewarmedPool
+          .filter { warmInfo =>
+            warmInfo match {
+              case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+              case _                                                                  => false
+            }
+          }
+        // emit expired container counter metric with memory + kind
+        val removedCount = expiredPrewarmedContainer.size
+        MetricEmitter.emitHistogramMetric(LoggingMarkers.CONTAINER_POOL_EXPIRED(memory.toString, kind), removedCount)

Review comment:
       I would name this metric "prewarmExpired"




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414848710



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +85,55 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      var currentCount = containers.size
+      for ((container, data) <- containers) {
+        if (currentCount > minCount && JDuration
+              .between(data.created, Instant.now)
+              .compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          container ! Remove
+          currentCount -= 1
+        }
+      }
+
+      // supplement some prewarmed container if cold start happened
+      val coldStartKey = ColdStartKey(kind, memory)
+      coldStartCount.get(coldStartKey) match {
+        case Some(value) =>
+          if (value >= config.threshold) {
+            val createdCount = value / config.increment
+            val count = if (createdCount > 0) createdCount else 1
+            prewarmContainerIfpossible(kind, memory, count)
+          } else {
+            // at lease create 1 prewarmed container
+            prewarmContainerIfpossible(kind, memory, 1)

Review comment:
       I don't think this `else` should be here - it is not clear from the configuration that when value == 5 and threshold == 5 you get 5/increment created, but when value == 1, 2, 3, or 4 you get 1 created. It should just always use `math.max(1, value / increment)`.
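
   i.e. (sketch):
   ```
   // always derive the count from the increment, but create at least 1 container
   val count = math.max(1, value / config.increment)
   prewarmContainerIfpossible(kind, memory, count)
   ```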




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413656602



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +86,59 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is `kind,memory,time`, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[String, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, delete prewarmed container if unused for sometime
+  context.system.scheduler.schedule(10.seconds, 1.minute, self, DeleteUnusedPrewarmedContainer)
+
+  // check periodically for the cold start and create some increment containers automatically if activation >= threshold
+  context.system.scheduler.schedule(1.minute, 1.minute, self, SupplementPrewarmedContainer)
+
+  def deleteUnusedPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      for ((container, data) <- containers) {
+        if (JDuration.between(data.lastUsed, Instant.now).compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          // Don't recover a new one under this situation
+          container ! RemovePreWarmedContainer

Review comment:
       If we use the `Remove` message to remove an unused prewarmed container, another new prewarmed container is created right after the unused one is deleted.
   
   The logic is as follows:
    first, the code here is executed: https://github.com/apache/openwhisk/blob/master/core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala#L376; then, after `context.parent ! ContainerRemoved` runs in `def destroyContainer`, the code here is executed: https://github.com/apache/openwhisk/blob/master/core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala#L272
   
   So I used another message, `RemovePreWarmedContainer`, to distinguish the two cases. It has another benefit: after the prewarmed container is deleted, its details can be printed via the `prewarmedData` field of `PreWarmedContainerRemoved(prewarmedData: PreWarmedData)`.
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425547197



##########
File path: common/scala/src/main/scala/org/apache/openwhisk/core/entity/ExecManifest.scala
##########
@@ -344,9 +373,38 @@ protected[core] object ExecManifest {
 
   protected[entity] implicit val imageNameSerdes: RootJsonFormat[ImageName] = jsonFormat4(ImageName.apply)
 
-  protected[entity] implicit val stemCellSerdes: RootJsonFormat[StemCell] = {
+  protected[entity] implicit val ttlSerdes: RootJsonFormat[FiniteDuration] = new RootJsonFormat[FiniteDuration] {
+    override def write(finiteDuration: FiniteDuration): JsValue = JsString(finiteDuration.toString)
+
+    override def read(value: JsValue): FiniteDuration = value match {
+      case JsString(s) =>
+        val duration = Duration(s)
+        FiniteDuration(duration.length, duration.unit)
+      case _ =>
+        deserializationError("time unit not supported. Only milliseconds, seconds, minutes, hours, days are supported")
+    }
+  }
+
+  protected[entity] implicit val reactivePrewarmingConfigSerdes: RootJsonFormat[ReactivePrewarmingConfig] = jsonFormat5(
+    ReactivePrewarmingConfig.apply)
+
+  protected[entity] implicit val stemCellSerdes = new RootJsonFormat[StemCell] {
     import org.apache.openwhisk.core.entity.size.serdes
-    jsonFormat2(StemCell.apply)
+    val defaultSerdes = jsonFormat3(StemCell.apply)
+    override def read(value: JsValue): StemCell = {
+      val fields = value.asJsObject.fields
+      val initialCount =
+        fields
+          .get("initialCount")
+          .orElse(fields.get("count"))
+          .map(_.convertTo[Int])
+          .get
+      val memory = fields.get("memory").map(_.convertTo[ByteSize]).get

Review comment:
       updated accordingly.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414857018



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +85,55 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>

Review comment:
       suggest:
   - change `PreWarmedData.created: Instant` to `PreWarmedData.expires: Option[Deadline]`
   - add to `PreWarmedData`: `def isExpired(): Boolean = expires.exists(_.isOverdue())`
   - simplify this removal logic to
   ```
         prewarmedPool
           .filter { warmInfo =>
             warmInfo match {
               case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
               case _                                                                  => false
             }
           }
           .drop(config.minCount)
           .foreach(_._1 ! Remove)
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425545320



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 code start happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containers, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(4).expectMsg(Remove)
+    containers(5).expectMsg(Remove)
+    containers(4).send(pool, ContainerRemoved(false))
+    containers(5).send(pool, ContainerRemoved(false))
+
+    // removed previous 2 prewarmed container due to expired
+    stream.toString should include(s"removed 2 expired prewarmed container")
+
+    stream.reset()
+    // 5 cold starts happened (5 > maxCount)
+    pool ! run
+    pool ! run
+    pool ! run
+    pool ! run
+    pool ! run
+
+    containers(6).expectMsg(run)
+    containers(7).expectMsg(run)
+    containers(8).expectMsg(run)
+    containers(9).expectMsg(run)
+    containers(10).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)

Review comment:
       Great! updated accordingly

##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 cold starts happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containers, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)

Review comment:
       updated accordingly

##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 cold starts happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containers, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)

Review comment:
       removed accordingly




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413661060



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -146,7 +146,7 @@ case class PreWarmedData(override val container: Container,
                          kind: String,
                          override val memoryLimit: ByteSize,
                          override val activeActivationCount: Int = 0)
-    extends ContainerStarted(container, Instant.EPOCH, memoryLimit, activeActivationCount)
+    extends ContainerStarted(container, Instant.now(), memoryLimit, activeActivationCount)

Review comment:
       Your suggested approach (moving that property to the ContainerInUse trait, and adding a dedicated property in PreWarmedData for tracking the expiration Instant or Deadline) may well be better.
   
   However, when I tried to implement it that way I couldn't get it to work cleanly - could you try it locally and check?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris edited a comment on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris edited a comment on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-619239158


   BTW I think the `takePrewarmContainer` needs enhancement as well, so that the "oldest" prewarm is always used - otherwise, it could end up where a random but recent prewarm is taken, and then all the others suddenly expire and are removed, leaving only the min. But if we always take the oldest, then there will be more left when expiring ones are removed.
   e.g. (assuming PrewarmedData.expires is a Option[Deadline] as commented elsewhere) 
   ```
       val now = Deadline.now
       prewarmedPool
         .toSeq
         .sortBy(_._2.expires.getOrElse(now))
         .find {
         ...
   ```
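   A slightly fuller, self-contained sketch of that oldest-first selection (a sketch only; `PreWarmed` and the pool's key type are simplified stand-ins here, not the PR's actual classes):
   ```
   import scala.concurrent.duration.Deadline

   // Simplified stand-in for PreWarmedData, keeping just the fields the selection needs.
   case class PreWarmed(kind: String, memoryMB: Long, expires: Option[Deadline])

   // Pick the prewarm that will expire soonest (the "oldest"), so fresher ones survive longer.
   def takeOldest(pool: Map[String, PreWarmed], kind: String, memoryMB: Long): Option[(String, PreWarmed)] = {
     val now = Deadline.now
     pool.toSeq
       .filter { case (_, p) => p.kind == kind && p.memoryMB == memoryMB }
       .sortBy { case (_, p) => p.expires.getOrElse(now) } // earliest deadline (or no deadline) first
       .headOption
   }
   ```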
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r421949078



##########
File path: common/scala/src/main/scala/org/apache/openwhisk/core/entity/ExecManifest.scala
##########
@@ -344,9 +373,37 @@ protected[core] object ExecManifest {
 
   protected[entity] implicit val imageNameSerdes: RootJsonFormat[ImageName] = jsonFormat4(ImageName.apply)
 
-  protected[entity] implicit val stemCellSerdes: RootJsonFormat[StemCell] = {
+  protected[entity] implicit val ttlSerdes: RootJsonFormat[Duration] = new RootJsonFormat[Duration] {
+    override def write(duration: Duration): JsValue = JsString(duration.toString)
+
+    override def read(value: JsValue): Duration = value match {
+      case JsString(s) => Duration(s)
+      case _ =>
+        deserializationError("time unit not supported. Only milliseconds, seconds, minutes, hours, days are supported")
+    }
+  }
+
+  protected[entity] implicit val reactivePrewarmingConfigSerdes: RootJsonFormat[ReactivePrewarmingConfig] = jsonFormat5(
+    ReactivePrewarmingConfig.apply)
+
+  protected[entity] implicit val stemCellSerdes = new RootJsonFormat[StemCell] {
     import org.apache.openwhisk.core.entity.size.serdes
-    jsonFormat2(StemCell.apply)
+    import org.apache.openwhisk.core.entity.size.SizeInt
+    val defaultSerdes = jsonFormat3(StemCell.apply)
+    override def read(value: JsValue): StemCell = {
+      val fields = value.asJsObject.fields
+      val initialCount =
+        fields
+          .get("initialCount")
+          .orElse(fields.get("count"))
+          .map(_.convertTo[Int])
+          .getOrElse(1)
+      val memory = fields.get("memory").map(_.convertTo[ByteSize]).getOrElse(256.MB)

Review comment:
       I changed `.getOrElse(1)` to `.get`, and `.getOrElse(256.MB)` to `.get` as well, because these two values are required.
   If a value is missing in runtimes.json, the invoker cannot start; for example, it reports errors like the ones below:
   ```
   [2020-05-08T14:36:44.095Z] [ERROR] [#tid_sid_unknown] [Invoker] Invalid runtimes manifest: java.lang.IllegalArgumentException: requirement failed: initialCount must be positive
   [2020-05-08T14:34:55.912Z] [ERROR] [#tid_sid_unknown] [Invoker] Invalid runtimes manifest: java.lang.IllegalArgumentException: Size Unit not supported. Only "B", "K[B]", "M[B]" and "G[B]" are supported.
   ```
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414030371



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -307,18 +384,19 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     prewarmConfig.foreach { config =>
       val kind = config.exec.kind
       val memory = config.memoryLimit
+      val initialCount = config.initialCount
       val currentCount = prewarmedPool.count {
         case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
         case _                                          => false //started but not finished starting
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
       val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      if (containerCount < initialCount) {

Review comment:
       I think this should be something like:
   ```
   val requiredCount = if (init) { initialCount } else {minCount}
   if (containerCount < requiredCount) {
   ```
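   Filled out a little, the idea might look like this (a sketch only; `initialCount`, `minCount` and `containerCount` stand for the values computed in the surrounding quoted code):
   ```
   // Hypothetical helper: during init use initialCount, afterwards only guarantee minCount.
   def needsBackfill(init: Boolean, initialCount: Int, minCount: Int, containerCount: Int): Boolean = {
     val requiredCount = if (init) initialCount else minCount
     containerCount < requiredCount
   }
   ```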




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r417018419



##########
File path: ansible/files/runtimes.json
##########
@@ -57,8 +57,15 @@
                 },
                 "stemCells": [
                     {
-                        "count": 2,
-                        "memory": "256 MB"
+                        "initialCount": 2,

Review comment:
       @rabbah, I think we can change `count` to `initialCount` directly.
   
   Regarding the concern that renaming the field would be a breaking change: I don't think it is a problem, because all of this PR's changes are on the invoker side only. Applying this PR to a new invoker has no bad influence on old invokers or on other components (e.g. the controller).
   Although the controller also loads the `runtime` info, it only uses it to print the runtime details: https://github.com/apache/openwhisk/blob/master/core/controller/src/main/scala/org/apache/openwhisk/core/controller/Controller.scala#L222
   
   BTW, regarding `xxx.get(initialCount).orElse(count)`: the idea is simple, but while implementing it I wasn't sure how to support it. Do `initialCount` and `count` both need to be configured in runtimes.json? Does `StemCell` need to support both `initialCount` and `count` as well? And what does `xxx` refer to here?
   
   How about keeping both `initialCount` and `count` in `runtimes.json`, `StemCell`, and `PrewarmingConfig` for now? The `count` field can then be removed later.
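   For reference, one reading of that suggestion, assuming `xxx` is simply the field map of the parsed stemCell JSON object (illustrative only, not the final serde):
   ```
   import spray.json._
   import spray.json.DefaultJsonProtocol._

   // Hypothetical: read the new field name, falling back to the legacy one.
   def readInitialCount(stemCell: JsObject): Option[Int] = {
     val fields = stemCell.fields                 // this is the "xxx" in question
     fields
       .get("initialCount")
       .orElse(fields.get("count"))               // legacy field name still accepted
       .map(_.convertTo[Int])
   }
   ```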




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413656602



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +86,59 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is `kind,memory,time`, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[String, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, delete prewarmed container if unused for sometime
+  context.system.scheduler.schedule(10.seconds, 1.minute, self, DeleteUnusedPrewarmedContainer)
+
+  // check periodically for the cold start and create some increment containers automatically if activation >= threshold
+  context.system.scheduler.schedule(1.minute, 1.minute, self, SupplementPrewarmedContainer)
+
+  def deleteUnusedPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      for ((container, data) <- containers) {
+        if (JDuration.between(data.lastUsed, Instant.now).compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          // Don't recover a new one under this situation
+          container ! RemovePreWarmedContainer

Review comment:
       If the `Remove` message is used to remove an unused prewarmed container, a new prewarmed container gets created right after the unused one is deleted.
   
   The logic is as follows:
    first this code runs: https://github.com/apache/openwhisk/blob/master/core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala#L376, then, after `context.parent ! ContainerRemoved` is executed in `def destroyContainer`, this code runs: https://github.com/apache/openwhisk/blob/master/core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala#L272
   
   So I used a separate message, `RemovePreWarmedContainer`, to distinguish the two cases. It has another benefit: after the prewarmed container is deleted, its details can be logged via the `prewarmedData` field of `PreWarmedContainerRemoved(prewarmedData: PreWarmedData)`.
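   A minimal sketch of that message split (actor messages reduced to the relevant cases; the signatures here are illustrative, not the PR's exact ones):
   ```
   import akka.actor.Actor

   case object RemovePreWarmedContainer               // pool -> proxy: remove an unused/expired prewarm
   case class PreWarmedContainerRemoved(kind: String) // proxy -> pool: prewarm gone on purpose, do NOT backfill
   case object ContainerRemovedSketch                 // proxy -> pool: ordinary removal, backfill prewarms

   class PoolSketch extends Actor {
     def receive: Receive = {
       case ContainerRemovedSketch =>
         backfillPrewarms() // an ordinary container went away: keep prewarm targets met

       case PreWarmedContainerRemoved(kind) =>
         // deliberately removed because it sat unused past its TTL: just report it, no replacement
         println(s"prewarmed container is deleted (kind=$kind)")
     }

     def backfillPrewarms(): Unit = () // placeholder for the real backfill logic
   }
   ```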
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] codecov-commenter edited a comment on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-630579281


   # [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=h1) Report
   > Merging [#4871](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/apache/openwhisk/commit/a4122fdd58a2c0ce6eab09fdc3a8678a75af83b2&el=desc) will **decrease** coverage by `5.94%`.
   > The diff coverage is `93.12%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/openwhisk/pull/4871/graphs/tree.svg?width=650&height=150&src=pr&token=l0YmsiSAso)](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##           master    #4871      +/-   ##
   ==========================================
   - Coverage   83.19%   77.25%   -5.95%     
   ==========================================
     Files         201      201              
     Lines        9351     9441      +90     
     Branches      397      396       -1     
   ==========================================
   - Hits         7780     7294     -486     
   - Misses       1571     2147     +576     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree) | Coverage Δ | |
   |---|---|---|
   | [...rg/apache/openwhisk/core/entity/ExecManifest.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZW50aXR5L0V4ZWNNYW5pZmVzdC5zY2FsYQ==) | `91.83% <78.57%> (-5.39%)` | :arrow_down: |
   | [.../openwhisk/core/containerpool/ContainerProxy.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQcm94eS5zY2FsYQ==) | `93.85% <85.00%> (+0.30%)` | :arrow_up: |
   | [...in/scala/org/apache/openwhisk/common/Logging.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvbW1vbi9Mb2dnaW5nLnNjYWxh) | `76.41% <100.00%> (-8.41%)` | :arrow_down: |
   | [...penwhisk/core/containerpool/ContainerFactory.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJGYWN0b3J5LnNjYWxh) | `87.50% <100.00%> (+0.83%)` | :arrow_up: |
   | [...e/openwhisk/core/containerpool/ContainerPool.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQb29sLnNjYWxh) | `97.45% <100.00%> (+1.80%)` | :arrow_up: |
   | [...pache/openwhisk/core/invoker/InvokerReactive.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvaW52b2tlci9JbnZva2VyUmVhY3RpdmUuc2NhbGE=) | `79.64% <100.00%> (ø)` | |
   | [...core/database/cosmosdb/RxObservableImplicits.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvUnhPYnNlcnZhYmxlSW1wbGljaXRzLnNjYWxh) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...ore/database/cosmosdb/cache/CacheInvalidator.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3Iuc2NhbGE=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...e/database/cosmosdb/cache/ChangeFeedConsumer.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NoYW5nZUZlZWRDb25zdW1lci5zY2FsYQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...core/database/cosmosdb/CosmosDBArtifactStore.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJBcnRpZmFjdFN0b3JlLnNjYWxh) | `0.00% <0.00%> (-96.23%)` | :arrow_down: |
   | ... and [19 more](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=footer). Last update [a4122fd...5ae5fa4](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r421949381



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +84,51 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      config.reactive match {

Review comment:
       Updated accordingly




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414108635



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -146,7 +146,7 @@ case class PreWarmedData(override val container: Container,
                          kind: String,
                          override val memoryLimit: ByteSize,
                          override val activeActivationCount: Int = 0)
-    extends ContainerStarted(container, Instant.EPOCH, memoryLimit, activeActivationCount)
+    extends ContainerStarted(container, Instant.now(), memoryLimit, activeActivationCount)

Review comment:
       I think it takes refactoring some or most of the uses of "ContainerData" generic type. For now you can just:
   * change prewarmedPool to type `var prewarmedPool = immutable.Map.empty[ActorRef, PreWarmedData]`
   * add a `val created = Instant.now()` to `PrewarmedData` type
   Then leave the `ContainerData.lastUsed` property for use when the container is actually allocated to an action.
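   Roughly, that suggestion amounts to something like this (a sketch with placeholder types where the real `Container`/`ByteSize` types would be; only the shape matters):
   ```
   import java.time.Instant
   import scala.collection.immutable
   import scala.concurrent.duration.Deadline
   import akka.actor.ActorRef

   // Placeholder container handle, standing in for the real Container type.
   case class ContainerHandle(id: String)

   case class PreWarmedDataSketch(container: ContainerHandle,
                                  kind: String,
                                  memoryMB: Long,
                                  expires: Option[Deadline] = None) {
     val created: Instant = Instant.now()               // when this prewarm was created
     def isExpired(): Boolean = expires.exists(_.isOverdue())
   }

   object PoolStateSketch {
     // Keyed specifically by PreWarmedData rather than the generic ContainerData.
     var prewarmedPool = immutable.Map.empty[ActorRef, PreWarmedDataSketch]
   }
   ```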




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414037817



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -32,10 +36,14 @@ sealed trait WorkerState
 case object Busy extends WorkerState
 case object Free extends WorkerState
 
+case class ColdStartKey(kind: String, memory: Long, date: String)

Review comment:
       What is date used for? 
   Memory should be ByteSize for consistency elsewhere. 
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-621550211


       @rabbah In the stemCell fields of runtimes.json, `initialCount` (or `count`) and `memory` must be configured; `reactive` is optional.
   To keep the code robust, I already changed the `initialCount` field to use `.getOrElse(x)`, and applied the same style to `memory` as well.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-630363594


   @ningyougang Thanks for all your work on this! I think this PR is close to done - do you want to solicit feedback on the dev list once more to let other people chime in on this before it gets merged?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r420422242



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -363,7 +480,12 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
           // Create a new prewarm container
           // NOTE: prewarming ignores the action code in exec, but this is dangerous as the field is accessible to the
           // factory
-          prewarmContainer(action.exec, memory)
+          val ttl = data.expires match {
+            case Some(deadline) =>
+              Some(deadline.time)
+            case None => None
+          }
+          prewarmContainer(action.exec, memory, ttl)

Review comment:
       In that case, remove the `val ttl = data.expires match`, and just call `prewarmContainer(action.exec, memory, data.expires)`
   I think you'll need to change ttl type from `Duration` to `FiniteDuration`
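   In sketch form (reduced signatures; `StartSketch` and `prewarmContainer` here are stand-ins assuming the ttl travels as an `Option[FiniteDuration]`):
   ```
   import scala.concurrent.duration.{Deadline, FiniteDuration}

   case class StartSketch(kind: String, memoryMB: Long, ttl: Option[FiniteDuration])

   def prewarmContainer(kind: String, memoryMB: Long, ttl: Option[FiniteDuration]): StartSketch =
     StartSketch(kind, memoryMB, ttl)

   // Caller side: no intermediate `match`, just map over the expiring prewarm's deadline.
   def replaceExpiring(kind: String, memoryMB: Long, expires: Option[Deadline]): StartSketch =
     prewarmContainer(kind, memoryMB, expires.map(_.time))
   ```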




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425010456



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,73 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        //done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        //started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+      //determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            //scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            //normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      //remove expired
+      if (scheduled) {
+        config.reactive.foreach { reactiveValue =>
+          prewarmedPool
+            .filter { warmInfo =>
+              warmInfo match {
+                case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+                case _                                                                  => false
+              }
+            }
+            .drop(reactiveValue.minCount) //keep minCount even if expired

Review comment:
       Yes, you are right, updated accordingly.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414043012



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -340,6 +418,45 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     newContainer ! Start(exec, memoryLimit)
   }
 
+  /** Create a new prewarm container if currentCount doesn't reach maxCount */
+  def prewarmContainerIfpossible(kind: String, memoryLimit: ByteSize, count: Int): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { config =>
+        val currentCount = prewarmedPool.count {
+          case (_, PreWarmedData(_, `kind`, `memoryLimit`, _)) => true //done starting
+          case _                                               => false //started but not finished starting
+        }
+        val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memoryLimit)
+        val containerCount = currentCount + startingCount
+        if (containerCount < config.maxCount) {
+          val createNumber = if (config.maxCount - containerCount > count) count else config.maxCount - containerCount
+          logging.info(this, s"add ${createNumber} [kind: ${kind} memory: ${memoryLimit}] prewarmed containers")
+          1 to createNumber foreach { _ =>
+            prewarmContainer(config.exec, config.memoryLimit)
+          }
+        }
+      }
+  }
+
+  /** statistics the cold start */
+  def incrementColdStartCount(kind: String, memoryLimit: ByteSize): Unit = {
+    prewarmConfig

Review comment:
       I would clarify that this is only for cold start of prewarm configs, e.g. not blackbox or other configs. Maybe just a comment is fine, but this is very important.
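   In other words, something along these lines (illustrative stand-ins only, not the PR's code):
   ```
   import scala.collection.immutable

   case class ColdStartKey(kind: String, memoryMB: Long)
   case class PrewarmSpecSketch(kind: String, memoryMB: Long)

   object ColdStartCounterSketch {
     var coldStartCount = immutable.Map.empty[ColdStartKey, Int]

     // Count a cold start only for kind/memory combinations that have a prewarm config,
     // i.e. blackbox or otherwise non-prewarmed actions never feed the reactive counters.
     def incrementColdStartCount(prewarmConfig: List[PrewarmSpecSketch], kind: String, memoryMB: Long): Unit =
       if (prewarmConfig.exists(c => c.kind == kind && c.memoryMB == memoryMB)) {
         val key = ColdStartKey(kind, memoryMB)
         coldStartCount = coldStartCount.updated(key, coldStartCount.getOrElse(key, 0) + 1)
       }
   }
   ```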




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r415528439



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +87,58 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      var currentCount = containers.size
+      for ((container, data) <- containers) {
+        if (currentCount > minCount && JDuration
+              .between(data.lastUsed, Instant.now)
+              .compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          container ! RemovePreWarmedContainer
+          currentCount -= 1
+        }
+      }
+
+      // supplement some prewarmed container if cold start happened
+      val calendar = Calendar.getInstance()
+      calendar.add(Calendar.MINUTE, -1)
+      val lastDate = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(calendar.getTime)
+      val coldStartKey = ColdStartKey(kind, memory.toMB, lastDate)
+      coldStartCount.get(coldStartKey) match {
+        case Some(value) =>
+          if (value >= config.threshold) {
+            val createdCount = value / config.increment
+            val count = if (createdCount > 0) createdCount else 1
+            prewarmContainerIfpossible(kind, memory, count)
+          } else {
+            // at least create 1 prewarmed container
+            prewarmContainerIfpossible(kind, memory, 1)
+          }
+        case None =>
+      }
+      coldStartCount = coldStartCount - coldStartKey

Review comment:
       Updated accordingly




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413648494



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -271,6 +336,13 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
       //backfill prewarms on every ContainerRemoved, just in case
       backfillPrewarms(false) //in case a prewarm is removed due to health failure or crash
 
+    // prewarmed container got removed
+    case PreWarmedContainerRemoved =>
+      prewarmedPool.get(sender()).foreach { _ =>
+        logging.info(this, "prewarmed container is deleted due to unused for long time")

Review comment:
       Already added




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425488995



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -67,6 +68,9 @@ class ContainerPoolTests
   // the values is done properly.
   val exec = CodeExecAsString(RuntimeManifest("actionKind", ImageName("testImage")), "testCode", None)
   val memoryLimit = 256.MB
+  val ttl = FiniteDuration(2, TimeUnit.SECONDS)

Review comment:
       I would make ttl shorter - so that test runs don't take longer than necessary. Maybe 500.millis?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r421949078



##########
File path: common/scala/src/main/scala/org/apache/openwhisk/core/entity/ExecManifest.scala
##########
@@ -344,9 +373,37 @@ protected[core] object ExecManifest {
 
   protected[entity] implicit val imageNameSerdes: RootJsonFormat[ImageName] = jsonFormat4(ImageName.apply)
 
-  protected[entity] implicit val stemCellSerdes: RootJsonFormat[StemCell] = {
+  protected[entity] implicit val ttlSerdes: RootJsonFormat[Duration] = new RootJsonFormat[Duration] {
+    override def write(duration: Duration): JsValue = JsString(duration.toString)
+
+    override def read(value: JsValue): Duration = value match {
+      case JsString(s) => Duration(s)
+      case _ =>
+        deserializationError("time unit not supported. Only milliseconds, seconds, minutes, hours, days are supported")
+    }
+  }
+
+  protected[entity] implicit val reactivePrewarmingConfigSerdes: RootJsonFormat[ReactivePrewarmingConfig] = jsonFormat5(
+    ReactivePrewarmingConfig.apply)
+
+  protected[entity] implicit val stemCellSerdes = new RootJsonFormat[StemCell] {
     import org.apache.openwhisk.core.entity.size.serdes
-    jsonFormat2(StemCell.apply)
+    import org.apache.openwhisk.core.entity.size.SizeInt
+    val defaultSerdes = jsonFormat3(StemCell.apply)
+    override def read(value: JsValue): StemCell = {
+      val fields = value.asJsObject.fields
+      val initialCount =
+        fields
+          .get("initialCount")
+          .orElse(fields.get("count"))
+          .map(_.convertTo[Int])
+          .getOrElse(1)
+      val memory = fields.get("memory").map(_.convertTo[ByteSize]).getOrElse(256.MB)

Review comment:
       I changed `.getOrElse(1)` to `.get`, and `.getOrElse(256.MB)` to `.get` as well, because these two values are required.
   If a value is missing in runtimes.json, the invoker cannot start; for example, it reports errors like the ones below:
   ```
   [ERROR] [#tid_sid_unknown] [Invoker] Invalid runtimes manifest: java.lang.IllegalArgumentException: requirement failed: initialCount must be positive
   ...
   ...
    [ERROR] [#tid_sid_unknown] [Invoker] Invalid runtimes manifest: java.lang.IllegalArgumentException: Size Unit not supported. Only "B", "K[B]", "M[B]" and "G[B]" are supported.
   ```
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] codecov-io commented on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
codecov-io commented on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-621566016


   # [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=h1) Report
   > Merging [#4871](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/apache/openwhisk/commit/d495993a54d81ccbd5c28f0aa971f254722b1f9d&el=desc) will **decrease** coverage by `6.02%`.
   > The diff coverage is `92.50%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/openwhisk/pull/4871/graphs/tree.svg?width=650&height=150&src=pr&token=l0YmsiSAso)](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##           master    #4871      +/-   ##
   ==========================================
   - Coverage   83.32%   77.29%   -6.03%     
   ==========================================
     Files         200      200              
     Lines        9283     9365      +82     
     Branches      383      384       +1     
   ==========================================
   - Hits         7735     7239     -496     
   - Misses       1548     2126     +578     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree) | Coverage Δ | |
   |---|---|---|
   | [...rg/apache/openwhisk/core/entity/ExecManifest.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZW50aXR5L0V4ZWNNYW5pZmVzdC5zY2FsYQ==) | `94.56% <86.36%> (-2.66%)` | :arrow_down: |
   | [.../openwhisk/core/containerpool/ContainerProxy.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQcm94eS5zY2FsYQ==) | `93.91% <86.95%> (+0.34%)` | :arrow_up: |
   | [...e/openwhisk/core/containerpool/ContainerPool.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQb29sLnNjYWxh) | `96.68% <95.94%> (+1.02%)` | :arrow_up: |
   | [...pache/openwhisk/core/invoker/InvokerReactive.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvaW52b2tlci9JbnZva2VyUmVhY3RpdmUuc2NhbGE=) | `79.64% <100.00%> (ø)` | |
   | [...core/database/cosmosdb/RxObservableImplicits.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvUnhPYnNlcnZhYmxlSW1wbGljaXRzLnNjYWxh) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...ore/database/cosmosdb/cache/CacheInvalidator.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3Iuc2NhbGE=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...e/database/cosmosdb/cache/ChangeFeedConsumer.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NoYW5nZUZlZWRDb25zdW1lci5zY2FsYQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...core/database/cosmosdb/CosmosDBArtifactStore.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJBcnRpZmFjdFN0b3JlLnNjYWxh) | `0.00% <0.00%> (-96.23%)` | :arrow_down: |
   | [...sk/core/database/cosmosdb/CosmosDBViewMapper.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJWaWV3TWFwcGVyLnNjYWxh) | `0.00% <0.00%> (-93.90%)` | :arrow_down: |
   | [...tabase/cosmosdb/cache/CacheInvalidatorConfig.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3JDb25maWcuc2NhbGE=) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
   | ... and [15 more](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=footer). Last update [d495993...215fee3](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414348730



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +87,58 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {

Review comment:
       Hmm.. it seems `adjustPrewarmedContainer`'s logic is not the same as `backfillPrewarms`, and internally it does not seem possible to extract the common code either.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r421952189



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -279,6 +282,10 @@ class ContainerProxy(factory: (TransactionId,
   when(Uninitialized) {
     // pre warm a container (creates a stem cell container)
     case Event(job: Start, _) =>
+      val deadline = job.ttl match {
+        case Some(value) => Some(FiniteDuration(value.toMillis, TimeUnit.MILLISECONDS).fromNow)

Review comment:
       Updated accordingly




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] codecov-commenter edited a comment on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
codecov-commenter edited a comment on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-630579281


   # [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=h1) Report
   > Merging [#4871](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/apache/openwhisk/commit/a4122fdd58a2c0ce6eab09fdc3a8678a75af83b2&el=desc) will **decrease** coverage by `5.94%`.
   > The diff coverage is `93.12%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/openwhisk/pull/4871/graphs/tree.svg?width=650&height=150&src=pr&token=l0YmsiSAso)](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##           master    #4871      +/-   ##
   ==========================================
   - Coverage   83.19%   77.25%   -5.95%     
   ==========================================
     Files         201      201              
     Lines        9351     9441      +90     
     Branches      397      396       -1     
   ==========================================
   - Hits         7780     7294     -486     
   - Misses       1571     2147     +576     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree) | Coverage Δ | |
   |---|---|---|
   | [...rg/apache/openwhisk/core/entity/ExecManifest.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZW50aXR5L0V4ZWNNYW5pZmVzdC5zY2FsYQ==) | `91.83% <78.57%> (-5.39%)` | :arrow_down: |
   | [.../openwhisk/core/containerpool/ContainerProxy.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQcm94eS5zY2FsYQ==) | `93.85% <85.00%> (+0.30%)` | :arrow_up: |
   | [...in/scala/org/apache/openwhisk/common/Logging.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvbW1vbi9Mb2dnaW5nLnNjYWxh) | `76.41% <100.00%> (-8.41%)` | :arrow_down: |
   | [...penwhisk/core/containerpool/ContainerFactory.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJGYWN0b3J5LnNjYWxh) | `87.50% <100.00%> (+0.83%)` | :arrow_up: |
   | [...e/openwhisk/core/containerpool/ContainerPool.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQb29sLnNjYWxh) | `97.45% <100.00%> (+1.80%)` | :arrow_up: |
   | [...pache/openwhisk/core/invoker/InvokerReactive.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvaW52b2tlci9JbnZva2VyUmVhY3RpdmUuc2NhbGE=) | `79.64% <100.00%> (ø)` | |
   | [...core/database/cosmosdb/RxObservableImplicits.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvUnhPYnNlcnZhYmxlSW1wbGljaXRzLnNjYWxh) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...ore/database/cosmosdb/cache/CacheInvalidator.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3Iuc2NhbGE=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...e/database/cosmosdb/cache/ChangeFeedConsumer.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NoYW5nZUZlZWRDb25zdW1lci5zY2FsYQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...core/database/cosmosdb/CosmosDBArtifactStore.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJBcnRpZmFjdFN0b3JlLnNjYWxh) | `0.00% <0.00%> (-96.23%)` | :arrow_down: |
   | ... and [19 more](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=footer). Last update [a4122fd...5ae5fa4](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r417067158



##########
File path: ansible/files/runtimes.json
##########
@@ -57,8 +57,15 @@
                 },
                 "stemCells": [
                     {
-                        "count": 2,
-                        "memory": "256 MB"
+                        "initialCount": 2,

Review comment:
       @rabbah, I have added a custom JSON deserializer so that both `initialCount` and `count` are supported in runtimes.json.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r420416949



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -363,7 +480,12 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
           // Create a new prewarm container
           // NOTE: prewarming ignores the action code in exec, but this is dangerous as the field is accessible to the
           // factory
-          prewarmContainer(action.exec, memory)
+          val ttl = data.expires match {
+            case Some(deadline) =>
+              Some(deadline.time)
+            case None => None
+          }
+          prewarmContainer(action.exec, memory, ttl)

Review comment:
       I don't think the deadline should propagate from the previous prewarm - it should be set fresh, based on `ReactivePrewarmingConfig.ttl` from now.
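   
   Roughly something like this (a hedged sketch - the type names are stand-ins for the real classes, not the PR's code):
   ```scala
   import scala.concurrent.duration._
   
   object TtlSketch extends App {
     // Hypothetical stand-ins for ReactivePrewarmingConfig / PreWarmedData.
     final case class ReactiveConfig(ttl: FiniteDuration)
     final case class PreWarmed(expires: Option[Deadline])
   
     // The replacement prewarm gets a fresh ttl from config; the old container's deadline is ignored.
     def ttlForReplacement(old: PreWarmed, reactive: Option[ReactiveConfig]): Option[FiniteDuration] =
       reactive.map(_.ttl)
   
     // Even if the old container's deadline has already passed, the replacement gets the full ttl.
     val cfg = Some(ReactiveConfig(10.minutes))
     assert(ttlForReplacement(PreWarmed(Some(Deadline.now - 1.second)), cfg).contains(10.minutes))
   }
   ```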




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413658626



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -375,6 +377,8 @@ class ContainerProxy(factory: (TransactionId,
 
     case Event(Remove, data: PreWarmedData) => destroyContainer(data)
 
+    case Event(RemovePreWarmedContainer, data: PreWarmedData) => destoryPreWarmedContainer(data)

Review comment:
       The reason is as follows: I wrote a new method, `def destoryPreWarmedContainer(newData: PreWarmedData)`, which seems clearer.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413648806



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -340,6 +418,47 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     newContainer ! Start(exec, memoryLimit)
   }
 
+  /** Create a new prewarm container if currentCount doesn't reach maxCount */
+  def prewarmContainerIfpossible(kind: String, memoryLimit: ByteSize, count: Int): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { config =>
+        val currentCount = prewarmedPool.count {
+          case (_, PreWarmedData(_, `kind`, `memoryLimit`, _)) => true //done starting
+          case _                                               => false //started but not finished starting
+        }
+        val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memoryLimit)
+        val containerCount = currentCount + startingCount
+        if (containerCount < config.maxCount) {
+          val createNumber = if (config.maxCount - containerCount > count) count else config.maxCount - containerCount
+          1 to createNumber foreach { _ =>
+            prewarmContainer(config.exec, config.memoryLimit)
+          }
+        }
+      }
+  }
+
+  /** statistics the cold start */
+  def countColdStart(kind: String, memoryLimit: ByteSize): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { _ =>
+        val time = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(Calendar.getInstance().getTime)
+        val key = s"${kind},${memoryLimit.toMB},${time}"
+        coldStartCount.get(key) match {
+          case Some(value) => coldStartCount = coldStartCount + (key -> (value + 1))
+          case None        => coldStartCount = coldStartCount + (key -> 1)
+        }
+        for ((k, v) <- coldStartCount) {
+          logging.info(this, s"===statistics the cold start, k: ${k}, v: ${v}")

Review comment:
       Already removed




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413649189



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +86,59 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is `kind,memory,time`, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[String, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, delete prewarmed container if unused for sometime
+  context.system.scheduler.schedule(10.seconds, 1.minute, self, DeleteUnusedPrewarmedContainer)
+
+  // check periodically for the cold start and create some increment containers automatically if activation >= threshold
+  context.system.scheduler.schedule(1.minute, 1.minute, self, SupplementPrewarmedContainer)

Review comment:
       Already changed to use the same message for both checks.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r417018419



##########
File path: ansible/files/runtimes.json
##########
@@ -57,8 +57,15 @@
                 },
                 "stemCells": [
                     {
-                        "count": 2,
-                        "memory": "256 MB"
+                        "initialCount": 2,

Review comment:
       @rabbah, I think we can change `count` to `initialCount` directly.
   
   Regarding the concern that `renaming the field would be a breaking change`: I don't think it is a problem, because all of this PR's changes are on the invoker side only. After applying this PR to a new invoker, there is no adverse effect on old invokers or on other components (e.g. the controller).
   Although the controller does hold the `runtime` info, it only uses it for one thing: printing the detailed runtime info: https://github.com/apache/openwhisk/blob/master/core/controller/src/main/scala/org/apache/openwhisk/core/controller/Controller.scala#L222
   
   BTW, regarding `xxx.get(initialCount).orElse(count)`:
   the idea is simple, but while implementing it I wasn't sure how to support it. e.g. would both `initialCount` and `count` need to be configured in runtimes.json? Would `StemCell` need to support both `initialCount` and `count` as well?
   For `xxx.get(initialCount).orElse(count)`, what does `xxx` refer to here?
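   
   (If `xxx` here means the parsed JsObject's field map inside a custom reader, then - just my guess at the intent - a minimal sketch could be:)
   ```scala
   import spray.json._
   import DefaultJsonProtocol._
   
   // Prefer the new field name, fall back to the legacy one, and fail if neither is present.
   def readInitialCount(value: JsValue): Int = {
     val fields = value.asJsObject.fields // Map[String, JsValue]
     fields
       .get("initialCount")
       .orElse(fields.get("count"))
       .map(_.convertTo[Int])
       .getOrElse(deserializationError("'initialCount' (or legacy 'count') is required"))
   }
   
   // e.g. readInitialCount("""{"count": 2, "memory": "256 MB"}""".parseJson) == 2
   ```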




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414043463



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -340,6 +418,45 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     newContainer ! Start(exec, memoryLimit)
   }
 
+  /** Create a new prewarm container if currentCount doesn't reach maxCount */
+  def prewarmContainerIfpossible(kind: String, memoryLimit: ByteSize, count: Int): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { config =>
+        val currentCount = prewarmedPool.count {
+          case (_, PreWarmedData(_, `kind`, `memoryLimit`, _)) => true //done starting
+          case _                                               => false //started but not finished starting
+        }
+        val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memoryLimit)
+        val containerCount = currentCount + startingCount
+        if (containerCount < config.maxCount) {
+          val createNumber = if (config.maxCount - containerCount > count) count else config.maxCount - containerCount
+          logging.info(this, s"add ${createNumber} [kind: ${kind} memory: ${memoryLimit}] prewarmed containers")
+          1 to createNumber foreach { _ =>
+            prewarmContainer(config.exec, config.memoryLimit)
+          }
+        }
+      }
+  }
+
+  /** statistics the cold start */
+  def incrementColdStartCount(kind: String, memoryLimit: ByteSize): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { _ =>
+        val time = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(Calendar.getInstance().getTime)

Review comment:
       I don't think the time field is needed in the key.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r420408257



##########
File path: common/scala/src/main/scala/org/apache/openwhisk/core/entity/ExecManifest.scala
##########
@@ -344,9 +373,37 @@ protected[core] object ExecManifest {
 
   protected[entity] implicit val imageNameSerdes: RootJsonFormat[ImageName] = jsonFormat4(ImageName.apply)
 
-  protected[entity] implicit val stemCellSerdes: RootJsonFormat[StemCell] = {
+  protected[entity] implicit val ttlSerdes: RootJsonFormat[Duration] = new RootJsonFormat[Duration] {
+    override def write(duration: Duration): JsValue = JsString(duration.toString)
+
+    override def read(value: JsValue): Duration = value match {
+      case JsString(s) => Duration(s)
+      case _ =>
+        deserializationError("time unit not supported. Only milliseconds, seconds, minutes, hours, days are supported")
+    }
+  }
+
+  protected[entity] implicit val reactivePrewarmingConfigSerdes: RootJsonFormat[ReactivePrewarmingConfig] = jsonFormat5(
+    ReactivePrewarmingConfig.apply)
+
+  protected[entity] implicit val stemCellSerdes = new RootJsonFormat[StemCell] {
     import org.apache.openwhisk.core.entity.size.serdes
-    jsonFormat2(StemCell.apply)
+    import org.apache.openwhisk.core.entity.size.SizeInt
+    val defaultSerdes = jsonFormat3(StemCell.apply)
+    override def read(value: JsValue): StemCell = {
+      val fields = value.asJsObject.fields
+      val initialCount =
+        fields
+          .get("initialCount")
+          .orElse(fields.get("count"))
+          .map(_.convertTo[Int])
+          .getOrElse(1)
+      val memory = fields.get("memory").map(_.convertTo[ByteSize]).getOrElse(256.MB)

Review comment:
       There is currently no default for "count" or "memory", so I don't think these defaults should be here.  If they are missing, it is invalid.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414347159



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -32,10 +36,14 @@ sealed trait WorkerState
 case object Busy extends WorkerState
 case object Free extends WorkerState
 
+case class ColdStartKey(kind: String, memory: Long, date: String)

Review comment:
       Updated accordingly




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang removed a comment on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang removed a comment on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-612709277


   Regarding this PR's Travis CI `system test`, it reported the error below:
   ```
   TASK [controller : wait until the Controller in this host is up and running] ***
   Sunday 12 April 2020  10:02:41 +0000 (0:00:01.132)       0:00:55.659 ********** 
   FAILED - RETRYING: wait until the Controller in this host is up and running (12 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (11 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (10 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (9 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (8 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (7 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (6 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (5 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (4 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (3 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (2 retries left).
   FAILED - RETRYING: wait until the Controller in this host is up and running (1 retries left).
   fatal: [controller0]: FAILED! => {"attempts": 12, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "https://172.17.0.1:10001/ping"}
   Status code was -1 and not [200]: Request failed: <urlopen error [Errno
   111] Connection refused>
   ```
   However, when I ran it in several local environments, the controller started successfully.
   I don't know why the controller failed to start in Travis CI; could someone check the Travis CI controller logs?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r421951778



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -363,7 +480,12 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
           // Create a new prewarm container
           // NOTE: prewarming ignores the action code in exec, but this is dangerous as the field is accessible to the
           // factory
-          prewarmContainer(action.exec, memory)
+          val ttl = data.expires match {
+            case Some(deadline) =>
+              Some(deadline.time)
+            case None => None
+          }
+          prewarmContainer(action.exec, memory, ttl)

Review comment:
       I changed the reactive config's ttl in runtimes.json to a `FiniteDuration`.
   
   Regarding `I guess deadline.time is same as configured ttl?`
   Yes.
   
   Regarding `remove the val ttl = data.expires match, and just call prewarmContainer(action.exec, memory, data.expires)`:
   Hmm, doing that would require changing `case class Start`'s third field to `Option[Deadline]`, which makes some test cases fail and they are not easy to get passing again.
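   
   For context, a simplified sketch of why keeping the third field as `Option[FiniteDuration]` (with a default) leaves the old call sites compiling (String/Long stand in for the real exec/ByteSize types - this is not the actual message definition):
   ```scala
   import scala.concurrent.duration._
   
   object StartMessageSketch extends App {
     // Simplified stand-in for the Start message.
     final case class Start(execKind: String, memoryMB: Long, ttl: Option[FiniteDuration] = None)
   
     val legacy   = Start("nodejs:10", 256)                   // existing two-argument call sites keep working
     val reactive = Start("nodejs:10", 256, Some(10.minutes))
   
     // The Deadline is only computed when the container actually starts.
     val deadline: Option[Deadline] = reactive.ttl.map(_.fromNow)
     assert(legacy.ttl.isEmpty && deadline.isDefined)
   }
   ```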
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425014017



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +757,102 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "supplement prewarmed container when doesn't have enough container to handle activation" in {
+    val (containers, factory) = testContainers(6)
+    val feed = TestProbe()
+
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer

Review comment:
       * Regarding moving `class ContainerPool`'s `adjustPrewarmedContainer` to `object ContainerPool`:
   I tried it, and it seems quite complex, because `adjustPrewarmedContainer` uses several class-level variables, e.g. prewarmedPool, prewarmStartingPool, and getReactiveCold (which uses coldStartCount). Moving it would mean every caller has to pass these variables along when invoking `adjustPrewarmedContainer` (see the sketch below).
   * Regarding `additional tests dedicated to all logic in this function`:
   I wrote extra test cases for that.
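   
   Purely as an illustration of that trade-off (the policy below is a placeholder, not the PR's algorithm): a companion-object helper would have to receive a snapshot of the actor's state as parameters, which is why keeping `adjustPrewarmedContainer` as an instance method is simpler here.
   ```scala
   // Placeholder shapes - only meant to show that all actor state must be passed in explicitly.
   final case class PrewarmSnapshot(prewarmedCount: Int, startingCount: Int, coldStartCount: Int)
   
   object ContainerPoolHelpers {
     /** Containers to add (positive) or remove (negative) to stay within [minCount, maxCount]. */
     def adjustment(snapshot: PrewarmSnapshot, minCount: Int, maxCount: Int): Int = {
       val current = snapshot.prewarmedCount + snapshot.startingCount
       val target  = math.min(maxCount, math.max(minCount, current + snapshot.coldStartCount))
       target - current
     }
   }
   ```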




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413648243



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +86,59 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is `kind,memory,time`, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[String, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, delete prewarmed container if unused for sometime
+  context.system.scheduler.schedule(10.seconds, 1.minute, self, DeleteUnusedPrewarmedContainer)

Review comment:
       Already added `minCount` in runtimes.json




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425491906



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 code start happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containrs, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(4).expectMsg(Remove)
+    containers(5).expectMsg(Remove)
+    containers(4).send(pool, ContainerRemoved(false))
+    containers(5).send(pool, ContainerRemoved(false))
+
+    // removed previous 2 prewarmed container due to expired
+    stream.toString should include(s"removed 2 expired prewarmed container")
+
+    stream.reset()
+    // 5 code start happened(5 > maxCount)
+    pool ! run
+    pool ! run
+    pool ! run
+    pool ! run
+    pool ! run
+
+    containers(6).expectMsg(run)
+    containers(7).expectMsg(run)
+    containers(8).expectMsg(run)
+    containers(9).expectMsg(run)
+    containers(10).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)

Review comment:
       I think you can avoid most of the `Thread.sleep` usage - e.g. use eventually, or similar 
   ```
   eventually { 
     stream.toString should include(s"currentCount: 0 prewarmed container")
   }
   ```
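   
   A fuller, self-contained version of that idea (assuming ScalaTest 3.1-style packages; the background writer is just a placeholder for the pool's asynchronous logging):
   ```scala
   import org.scalatest.concurrent.Eventually._
   import org.scalatest.matchers.should.Matchers._
   import org.scalatest.time.{Millis, Seconds, Span}
   
   object EventuallySketch extends App {
     val stream = new java.io.ByteArrayOutputStream()
   
     // Simulate log output arriving asynchronously rather than sleeping for a fixed amount of time.
     new Thread(() => {
       Thread.sleep(500)
       stream.write("currentCount: 0 prewarmed container".getBytes("UTF-8"))
     }).start()
   
     // Poll every 100 ms for up to 5 seconds instead of a blind Thread.sleep.
     eventually(timeout(Span(5, Seconds)), interval(Span(100, Millis))) {
       stream.toString should include("currentCount: 0 prewarmed container")
     }
   }
   ```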




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425492050



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 code start happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containrs, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)

Review comment:
       can you remove this sleep?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425490217



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)

Review comment:
       I would leave out this sleep (and the log check) - receiving the Remove message should be enough to make sure it is removed
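    For illustration, a sleep-free version of that part of the test could look like the fragment below (same probes and messages as in the quoted diff, just without the Thread.sleep calls; expectMsg already blocks until the message arrives or its timeout expires):

    ```scala
    pool ! AdjustPrewarmedContainer
    // expectMsg waits for the Remove message, so no sleep is needed before it
    containers(0).expectMsg(Remove)
    containers(1).expectMsg(Remove)
    // the proxies confirm their removal back to the pool
    containers(0).send(pool, ContainerRemoved(false))
    containers(1).send(pool, ContainerRemoved(false))
    ```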




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425545253



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)

Review comment:
       updated accordingly




[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414792363



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +87,58 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      var currentCount = containers.size
+      for ((container, data) <- containers) {
+        if (currentCount > minCount && JDuration
+              .between(data.lastUsed, Instant.now)
+              .compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          container ! RemovePreWarmedContainer
+          currentCount -= 1
+        }
+      }
+
+      // supplement some prewarmed container if cold start happened
+      val calendar = Calendar.getInstance()
+      calendar.add(Calendar.MINUTE, -1)
+      val lastDate = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(calendar.getTime)
+      val coldStartKey = ColdStartKey(kind, memory.toMB, lastDate)
+      coldStartCount.get(coldStartKey) match {
+        case Some(value) =>
+          if (value >= config.threshold) {
+            val createdCount = value / config.increment
+            val count = if (createdCount > 0) createdCount else 1
+            prewarmContainerIfpossible(kind, memory, count)
+          } else {
+            // at least create 1 prewarmed container
+            prewarmContainerIfpossible(kind, memory, 1)
+          }
+        case None =>
+      }
+      coldStartCount = coldStartCount - coldStartKey

Review comment:
       I mean that it might be good to simply reset the map at once, instead of removing each key separately.
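    As a minimal sketch of that suggestion (a fragment of ContainerPool, reusing the coldStartCount field from the diff above):

    ```scala
    // per-key removal, as in the current diff:
    coldStartCount = coldStartCount - coldStartKey

    // suggested alternative: reset the whole map once, after the prewarmConfig.foreach loop
    coldStartCount = immutable.Map.empty[ColdStartKey, Int]
    ```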




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413658626



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -375,6 +377,8 @@ class ContainerProxy(factory: (TransactionId,
 
     case Event(Remove, data: PreWarmedData) => destroyContainer(data)
 
+    case Event(RemovePreWarmedContainer, data: PreWarmedData) => destoryPreWarmedContainer(data)

Review comment:
       The reason is as follows: I wrote a new method `def destoryPreWarmedContainer(newData: PreWarmedData)`. The important point inside that method is
   ```
   context.parent ! PreWarmedContainerRemoved(newData)
   ```
   After the ContainerPool actor receives this message, it can log the prewarmed container's details (e.g. kind/memory).




[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414025340



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +86,59 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is `kind,memory,time`, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[String, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, delete prewarmed container if unused for sometime
+  context.system.scheduler.schedule(10.seconds, 1.minute, self, DeleteUnusedPrewarmedContainer)
+
+  // check periodically for the cold start and create some increment containers automatically if activation >= threshold
+  context.system.scheduler.schedule(1.minute, 1.minute, self, SupplementPrewarmedContainer)
+
+  def deleteUnusedPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      for ((container, data) <- containers) {
+        if (JDuration.between(data.lastUsed, Instant.now).compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          // Don't recover a new one under this situation
+          container ! RemovePreWarmedContainer

Review comment:
       I see. In that case, I suggest changing `case object ContainerRemoved` to `case class ContainerRemoved(replacePrewarm: Boolean)` so that we can limit the duplicated logic in `ContainerProxy` - would that work?
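    A rough sketch of that shape, with the pool-side handling shown only as illustrative comments (field and method names are taken from elsewhere in this thread):

    ```scala
    /** Signals that a container is gone; the flag tells ContainerPool whether to backfill a prewarm. */
    case class ContainerRemoved(replacePrewarm: Boolean)

    // ContainerPool side, roughly:
    // case ContainerRemoved(replacePrewarm) =>
    //   freePool = freePool - sender()
    //   prewarmedPool = prewarmedPool - sender()
    //   if (replacePrewarm) adjustPrewarmedContainer(init = false, scheduled = false)
    ```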




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425011031



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +757,102 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "supplement prewarmed container when doesn't have enough container to handle activation" in {
+    val (containers, factory) = testContainers(6)
+    val feed = TestProbe()
+
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    // Make sure cold start in previous 1 minute can be counted
+    Thread.sleep(60.seconds.toMillis)

Review comment:
       Already made the schedule period configurable.
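    In other words, the scheduler call stops hard-coding 1.minute and reads the period from the pool config instead. A sketch, where the field name mirrors the prewarmExpiredCheckPeriod property that appears later in the ansible diff and is only illustrative:

    ```scala
    // was: context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
    context.system.scheduler.schedule(
      poolConfig.prewarmExpiredCheckPeriod,
      poolConfig.prewarmExpiredCheckPeriod,
      self,
      AdjustPrewarmedContainer)
    ```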




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425547151



##########
File path: ansible/roles/invoker/tasks/deploy.yml
##########
@@ -278,6 +278,7 @@
       "CONFIG_whisk_invoker_https_keystorePassword": "{{ invoker.ssl.keystore.password }}"
       "CONFIG_whisk_invoker_https_keystoreFlavor": "{{ invoker.ssl.storeFlavor }}"
       "CONFIG_whisk_invoker_https_clientAuth": "{{ invoker.ssl.clientAuth }}"
+      "CONFIG_whisk_containerPool_prewarmExpiredCheckPeriod": "{{ container_pool_prewarmExpiredCheckPeriod | default('1 minute') }}"

Review comment:
       Great, updated accordingly.




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425545095



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,94 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        // done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        // started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] currentCount: ${currentCount} prewarmed container")
+
+      // determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            // scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            // normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] needs ${desiredCount} desired prewarmed container")
+
+      // remove expired
+      config.reactive.foreach { _ =>
+        val expiredPrewarmedContainer = prewarmedPool
+          .filter { warmInfo =>
+            warmInfo match {
+              case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+              case _                                                                  => false
+            }
+          }
+        // emit expired container counter metric with memory + kind
+        val removedCount = expiredPrewarmedContainer.size
+        MetricEmitter.emitHistogramMetric(LoggingMarkers.CONTAINER_POOL_EXPIRED(memory.toString, kind), removedCount)

Review comment:
       updated accordingly

##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)

Review comment:
       updated accordingly

##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -67,6 +68,9 @@ class ContainerPoolTests
   // the values is done properly.
   val exec = CodeExecAsString(RuntimeManifest("actionKind", ImageName("testImage")), "testCode", None)
   val memoryLimit = 256.MB
+  val ttl = FiniteDuration(2, TimeUnit.SECONDS)

Review comment:
       Great! updated accordingly




[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414852125



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -502,9 +608,16 @@ object ContainerPool {
   def props(factory: ActorRefFactory => ActorRef,
             poolConfig: ContainerPoolConfig,
             feed: ActorRef,
-            prewarmConfig: List[PrewarmingConfig] = List.empty) =
+            prewarmConfig: List[PrewarmingConfig] = List.empty)(implicit logging: Logging) =
     Props(new ContainerPool(factory, feed, prewarmConfig, poolConfig))
 }
 
 /** Contains settings needed to perform container prewarming. */
-case class PrewarmingConfig(count: Int, exec: CodeExec[_], memoryLimit: ByteSize)
+case class PrewarmingConfig(initialCount: Int,
+                            minCount: Int,
+                            maxCount: Int,
+                            exec: CodeExec[_],
+                            memoryLimit: ByteSize,
+                            ttl: Duration,
+                            threshold: Int,
+                            increment: Int)

Review comment:
       I think it may be good to separate the dynamic effects into a separate config, which is disabled by default so that operators need to opt in to this new behavior, e.g.:
   ```
   case class PrewarmingConfig(initialCount: Int,
                               exec: CodeExec[_],
                               memoryLimit: ByteSize,
                               reactiveConfig: Option[ReactivePrewarmingConfig]=None)
   
   case class ReactivePrewarmingConfig(minCount: Int,
                               maxCount: Int,
                               ttl: Duration,
                               threshold: Int,
                               increment: Int)
   ```
   
   Add some comments to indicate what the relationship between `threshold` and `increment` is, i.e. how they determine the number of prewarms created and when.
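    For reference, one way that relationship could be written down (mirroring the earlier diff, where a period's cold-start count is turned into a number of prewarms; the helper name and the maxCount clamp are illustrative):

    ```scala
    // coldStarts: cold starts observed for this kind/memory in the last check period
    // threshold:  only react once at least this many cold starts were seen
    // increment:  create one prewarm per `increment` cold starts (at least one), capped at maxCount
    def desiredPrewarms(coldStarts: Int, threshold: Int, increment: Int, minCount: Int, maxCount: Int): Int =
      if (coldStarts >= threshold) math.max(coldStarts / increment, 1).min(maxCount)
      else minCount
    ```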




[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425485899



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,94 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        // done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        // started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] currentCount: ${currentCount} prewarmed container")
+
+      // determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            // scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            // normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] needs ${desiredCount} desired prewarmed container")
+
+      // remove expired
+      config.reactive.foreach { _ =>
+        val expiredPrewarmedContainer = prewarmedPool
+          .filter { warmInfo =>
+            warmInfo match {
+              case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+              case _                                                                  => false
+            }
+          }
+        // emit expired container counter metric with memory + kind
+        val removedCount = expiredPrewarmedContainer.size
+        MetricEmitter.emitHistogramMetric(LoggingMarkers.CONTAINER_POOL_EXPIRED(memory.toString, kind), removedCount)

Review comment:
       Also, these metrics should be counters not histograms, I think.
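    Concretely that would be something like the fragment below, assuming a MetricEmitter.emitCounterMetric helper is available next to emitHistogramMetric:

    ```scala
    // one counter tick per expired prewarm instead of a single histogram sample of the count
    (1 to removedCount).foreach { _ =>
      MetricEmitter.emitCounterMetric(LoggingMarkers.CONTAINER_POOL_EXPIRED(memory.toString, kind))
    }
    ```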




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414346665



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerProxy.scala
##########
@@ -146,7 +146,7 @@ case class PreWarmedData(override val container: Container,
                          kind: String,
                          override val memoryLimit: ByteSize,
                          override val activeActivationCount: Int = 0)
-    extends ContainerStarted(container, Instant.EPOCH, memoryLimit, activeActivationCount)
+    extends ContainerStarted(container, Instant.now(), memoryLimit, activeActivationCount)

Review comment:
       Updated accordingly




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414347772



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +87,58 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold Start in minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute, adjust prewarmed container(delete if unused for some time and create some increment containers)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      var currentCount = containers.size
+      for ((container, data) <- containers) {
+        if (currentCount > minCount && JDuration
+              .between(data.lastUsed, Instant.now)
+              .compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          container ! RemovePreWarmedContainer
+          currentCount -= 1
+        }
+      }
+
+      // supplement some prewarmed container if cold start happened
+      val calendar = Calendar.getInstance()
+      calendar.add(Calendar.MINUTE, -1)
+      val lastDate = new SimpleDateFormat("yyyy-MM-dd-HH:mm").format(calendar.getTime)
+      val coldStartKey = ColdStartKey(kind, memory.toMB, lastDate)
+      coldStartCount.get(coldStartKey) match {
+        case Some(value) =>
+          if (value >= config.threshold) {
+            val createdCount = value / config.increment
+            val count = if (createdCount > 0) createdCount else 1
+            prewarmContainerIfpossible(kind, memory, count)
+          } else {
+            // at least create 1 prewarmed container
+            prewarmContainerIfpossible(kind, memory, 1)
+          }
+        case None =>
+      }
+      coldStartCount = coldStartCount - coldStartKey

Review comment:
       Yes, this resets the map's entry for that kind/memory to empty.

##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -340,6 +418,45 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     newContainer ! Start(exec, memoryLimit)
   }
 
+  /** Create a new prewarm container if currentCount doesn't reach maxCount */
+  def prewarmContainerIfpossible(kind: String, memoryLimit: ByteSize, count: Int): Unit = {
+    prewarmConfig
+      .filter { config =>
+        kind == config.exec.kind && memoryLimit == config.memoryLimit
+      }
+      .foreach { config =>
+        val currentCount = prewarmedPool.count {
+          case (_, PreWarmedData(_, `kind`, `memoryLimit`, _)) => true //done starting
+          case _                                               => false //started but not finished starting
+        }
+        val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memoryLimit)
+        val containerCount = currentCount + startingCount
+        if (containerCount < config.maxCount) {
+          val createNumber = if (config.maxCount - containerCount > count) count else config.maxCount - containerCount
+          logging.info(this, s"add ${createNumber} [kind: ${kind} memory: ${memoryLimit}] prewarmed containers")
+          1 to createNumber foreach { _ =>
+            prewarmContainer(config.exec, config.memoryLimit)
+          }
+        }
+      }
+  }
+
+  /** statistics the cold start */
+  def incrementColdStartCount(kind: String, memoryLimit: ByteSize): Unit = {
+    prewarmConfig

Review comment:
       Updated accordingly




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r413648618



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -313,12 +391,12 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
       val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      if (containerCount < config.initialCount) {

Review comment:
       Already added




[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r414030371



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -307,18 +384,19 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
     prewarmConfig.foreach { config =>
       val kind = config.exec.kind
       val memory = config.memoryLimit
+      val initialCount = config.initialCount
       val currentCount = prewarmedPool.count {
         case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
         case _                                          => false //started but not finished starting
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
       val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      if (containerCount < initialCount) {

Review comment:
       I think this should be something like:
   ```
   val requiredCount = if (init) { initialCount } else {minCount}
   if (containerCount < requiredCount) {
   ```
   Otherwise if min is 0 and initial is 2, we can never scale down to 0.




[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425485379



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -303,26 +326,94 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   }
 
   /** Install prewarm containers up to the configured requirements for each kind/memory combination. */
-  def backfillPrewarms(init: Boolean) = {
+  def adjustPrewarmedContainer(init: Boolean, scheduled: Boolean): Unit = {
     prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
       val kind = config.exec.kind
       val memory = config.memoryLimit
-      val currentCount = prewarmedPool.count {
-        case (_, PreWarmedData(_, `kind`, `memory`, _)) => true //done starting
-        case _                                          => false //started but not finished starting
+
+      val runningCount = prewarmedPool.count {
+        // done starting, and not expired
+        case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if !p.isExpired() => true
+        // started but not finished starting (or expired)
+        case _ => false
       }
       val startingCount = prewarmStartingPool.count(p => p._2._1 == kind && p._2._2 == memory)
-      val containerCount = currentCount + startingCount
-      if (containerCount < config.count) {
+      val currentCount = runningCount + startingCount
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] currentCount: ${currentCount} prewarmed container")
+
+      // determine how many are needed
+      val desiredCount: Int =
+        if (init) config.initialCount
+        else {
+          if (scheduled) {
+            // scheduled/reactive config backfill
+            config.reactive
+              .map(c => getReactiveCold(c, kind, memory).getOrElse(c.minCount)) //reactive -> desired is either cold start driven, or minCount
+              .getOrElse(config.initialCount) //not reactive -> desired is always initial count
+          } else {
+            // normal backfill after removal - make sure at least minCount or initialCount is started
+            config.reactive.map(_.minCount).getOrElse(config.initialCount)
+          }
+        }
+
+      logging.info(
+        this,
+        s"[kind: ${kind} memory: ${memory.toString}] needs ${desiredCount} desired prewarmed container")
+
+      // remove expired
+      config.reactive.foreach { _ =>
+        val expiredPrewarmedContainer = prewarmedPool
+          .filter { warmInfo =>
+            warmInfo match {
+              case (_, p @ PreWarmedData(_, `kind`, `memory`, _, _)) if p.isExpired() => true
+              case _                                                                  => false
+            }
+          }
+        // emit expired container counter metric with memory + kind
+        val removedCount = expiredPrewarmedContainer.size
+        MetricEmitter.emitHistogramMetric(LoggingMarkers.CONTAINER_POOL_EXPIRED(memory.toString, kind), removedCount)
         logging.info(
           this,
-          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${config.count - containerCount} pre-warms to desired count: ${config.count} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
+          s"[kind: ${kind} memory: ${memory.toString}] removed ${removedCount} expired prewarmed container")
+        expiredPrewarmedContainer.map(_._1 ! Remove)
+      }
+
+      if (currentCount < desiredCount) {
+        logging.info(
+          this,
+          s"found ${currentCount} started and ${startingCount} starting; ${if (init) "initing" else "backfilling"} ${desiredCount - currentCount} pre-warms to desired count: ${desiredCount} for kind:${config.exec.kind} mem:${config.memoryLimit.toString}")(
           TransactionId.invokerWarmup)
-        (containerCount until config.count).foreach { _ =>
-          prewarmContainer(config.exec, config.memoryLimit)
+        (currentCount until desiredCount).foreach { _ =>
+          prewarmContainer(config.exec, config.memoryLimit, config.reactive.map(_.ttl))
         }
       }
     }
+    if (scheduled) {
+      // emit cold start counter metric with memory + kind
+      coldStartCount foreach { coldStart =>
+        val coldStartKey = coldStart._1
+        val coldStartValue = coldStart._2
+        MetricEmitter.emitHistogramMetric(
+          LoggingMarkers.CONTAINER_POOL_COLDSTART(coldStartKey.memory.toString, coldStartKey.kind),
+          coldStartValue)

Review comment:
       I would name this metric "prewarmColdstart" - to not confuse it with containerStart (which has a tag for containerState). These values are not just cold starts - but cold starts that matched prewarm configs (and did not have a prewarm available)




[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r426734713



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 cold starts happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containers, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer

Review comment:
       But the test is written in a way suggesting that the `AdjustPrewarmedContainer` message is explicitly sent by production logic, but it never is - it is only ever sent via the scheduler. So it would be clearer to rely on the timing (with a shorter period) and verify that the results of the message occur without interference from the test logic.
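    In test terms that means dropping the explicit pool ! AdjustPrewarmedContainer and waiting for the scheduled tick instead, e.g. (a sketch that assumes the pool was built with a short check period; expectMsg with an explicit max simply waits that long):

    ```scala
    // no explicit AdjustPrewarmedContainer message from the test;
    // the scheduled adjustment should remove the expired prewarms on its own
    containers(0).expectMsg(ttl + 3.seconds, Remove)
    containers(1).expectMsg(ttl + 3.seconds, Remove)
    ```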




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r417018419



##########
File path: ansible/files/runtimes.json
##########
@@ -57,8 +57,15 @@
                 },
                 "stemCells": [
                     {
-                        "count": 2,
-                        "memory": "256 MB"
+                        "initialCount": 2,

Review comment:
       @rabbah, I think we can change `count` to `initialCount` directly.
   
   As for the concern that `renaming the field would be a breaking change`, I don't think it is a problem, because all of this PR's changes are on the invoker side only; after applying this PR to a new invoker, there is no bad influence on old invokers or other components (e.g. the controller).
   Even though the controller holds the `runtime` info, it has only one effect: printing the detailed runtime info: https://github.com/apache/openwhisk/blob/master/core/controller/src/main/scala/org/apache/openwhisk/core/controller/Controller.scala#L222
   
   BTW, regarding `xxx.get(initialCount).orElse(count)`:
   What is `xxx` here? And where is `.get` written?
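    (For context, `xxx.get(initialCount).orElse(count)` presumably refers to reading the stemCell JSON with a fallback, roughly like the hypothetical helper below; the field names mirror runtimes.json, everything else is illustrative.)

    ```scala
    import spray.json._
    import spray.json.DefaultJsonProtocol._

    // prefer the new "initialCount" field, fall back to the legacy "count" field
    def stemCellCount(stemCell: JsObject): Int =
      stemCell.fields
        .get("initialCount")
        .orElse(stemCell.fields.get("count"))
        .map(_.convertTo[Int])
        .getOrElse(0)
    ```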




[GitHub] [openwhisk] tysonnorris merged pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris merged pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871


   


[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r420421547



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -363,7 +480,12 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
           // Create a new prewarm container
           // NOTE: prewarming ignores the action code in exec, but this is dangerous as the field is accessible to the
           // factory
-          prewarmContainer(action.exec, memory)
+          val ttl = data.expires match {
+            case Some(deadline) =>
+              Some(deadline.time)
+            case None => None
+          }
+          prewarmContainer(action.exec, memory, ttl)

Review comment:
       I see, never mind. I guess `deadline.time` is the same as the configured ttl?
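    Side note: if that holds, the match in the diff collapses to a one-liner, e.g. this sketch:

    ```scala
    // Option#map covers both the Some and None branches of the match above
    val ttl = data.expires.map(_.time)
    prewarmContainer(action.exec, memory, ttl)
    ```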




[GitHub] [openwhisk] style95 commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
style95 commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r415747873



##########
File path: ansible/files/runtimes.json
##########
@@ -57,8 +57,15 @@
                 },
                 "stemCells": [
                     {
-                        "count": 2,
-                        "memory": "256 MB"
+                        "initialCount": 2,

Review comment:
       I think this would be a breaking change.
   How about sharing this with the dev list as well?




[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r415528800



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -502,9 +608,16 @@ object ContainerPool {
   def props(factory: ActorRefFactory => ActorRef,
             poolConfig: ContainerPoolConfig,
             feed: ActorRef,
-            prewarmConfig: List[PrewarmingConfig] = List.empty) =
+            prewarmConfig: List[PrewarmingConfig] = List.empty)(implicit logging: Logging) =
     Props(new ContainerPool(factory, feed, prewarmConfig, poolConfig))
 }
 
 /** Contains settings needed to perform container prewarming. */
-case class PrewarmingConfig(count: Int, exec: CodeExec[_], memoryLimit: ByteSize)
+case class PrewarmingConfig(initialCount: Int,
+                            minCount: Int,
+                            maxCount: Int,
+                            exec: CodeExec[_],
+                            memoryLimit: ByteSize,
+                            ttl: Duration,
+                            threshold: Int,
+                            increment: Int)

Review comment:
       Great suggestion, updated accordingly.
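
   A sketch of the refactored configuration shape implied by this exchange and by the tests later in the thread (field names and the default inferred from that test code, not authoritative; `CodeExec` and `ByteSize` are the existing OpenWhisk types assumed to be in scope):
   ```scala
   import scala.concurrent.duration.FiniteDuration

   // Inferred shape only: the reactive tuning knobs grouped into their own case class,
   // instead of the flat eight-field PrewarmingConfig shown in the hunk above.
   case class ReactivePrewarmingConfig(minCount: Int,
                                       maxCount: Int,
                                       ttl: FiniteDuration,
                                       threshold: Int,
                                       increment: Int)

   case class PrewarmingConfig(initialCount: Int,
                               exec: CodeExec[_],
                               memoryLimit: ByteSize,
                               reactive: Option[ReactivePrewarmingConfig] = None)
   ```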

##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +85,55 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold starts within the last minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute; adjust the prewarmed containers (delete those unused for some time and create additional ones where needed)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>

Review comment:
       updated accordingly.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] codecov-io edited a comment on pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
codecov-io edited a comment on pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#issuecomment-621566016


   # [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=h1) Report
   > Merging [#4871](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=desc) into [master](https://codecov.io/gh/apache/openwhisk/commit/d495993a54d81ccbd5c28f0aa971f254722b1f9d&el=desc) will **decrease** coverage by `6.02%`.
   > The diff coverage is `92.50%`.
   
   [![Impacted file tree graph](https://codecov.io/gh/apache/openwhisk/pull/4871/graphs/tree.svg?width=650&height=150&src=pr&token=l0YmsiSAso)](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##           master    #4871      +/-   ##
   ==========================================
   - Coverage   83.32%   77.29%   -6.03%     
   ==========================================
     Files         200      200              
     Lines        9283     9365      +82     
     Branches      383      384       +1     
   ==========================================
   - Hits         7735     7239     -496     
   - Misses       1548     2126     +578     
   ```
   
   
   | [Impacted Files](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=tree) | Coverage Δ | |
   |---|---|---|
   | [...rg/apache/openwhisk/core/entity/ExecManifest.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZW50aXR5L0V4ZWNNYW5pZmVzdC5zY2FsYQ==) | `94.56% <86.36%> (-2.66%)` | :arrow_down: |
   | [.../openwhisk/core/containerpool/ContainerProxy.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQcm94eS5zY2FsYQ==) | `93.91% <86.95%> (+0.34%)` | :arrow_up: |
   | [...e/openwhisk/core/containerpool/ContainerPool.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvY29udGFpbmVycG9vbC9Db250YWluZXJQb29sLnNjYWxh) | `96.68% <95.94%> (+1.02%)` | :arrow_up: |
   | [...pache/openwhisk/core/invoker/InvokerReactive.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9pbnZva2VyL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvaW52b2tlci9JbnZva2VyUmVhY3RpdmUuc2NhbGE=) | `79.64% <100.00%> (ø)` | |
   | [...core/database/cosmosdb/RxObservableImplicits.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvUnhPYnNlcnZhYmxlSW1wbGljaXRzLnNjYWxh) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...ore/database/cosmosdb/cache/CacheInvalidator.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3Iuc2NhbGE=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...e/database/cosmosdb/cache/ChangeFeedConsumer.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NoYW5nZUZlZWRDb25zdW1lci5zY2FsYQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
   | [...core/database/cosmosdb/CosmosDBArtifactStore.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJBcnRpZmFjdFN0b3JlLnNjYWxh) | `0.00% <0.00%> (-96.23%)` | :arrow_down: |
   | [...sk/core/database/cosmosdb/CosmosDBViewMapper.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29tbW9uL3NjYWxhL3NyYy9tYWluL3NjYWxhL29yZy9hcGFjaGUvb3BlbndoaXNrL2NvcmUvZGF0YWJhc2UvY29zbW9zZGIvQ29zbW9zREJWaWV3TWFwcGVyLnNjYWxh) | `0.00% <0.00%> (-93.90%)` | :arrow_down: |
   | [...tabase/cosmosdb/cache/CacheInvalidatorConfig.scala](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree#diff-Y29yZS9jb3Ntb3NkYi9jYWNoZS1pbnZhbGlkYXRvci9zcmMvbWFpbi9zY2FsYS9vcmcvYXBhY2hlL29wZW53aGlzay9jb3JlL2RhdGFiYXNlL2Nvc21vc2RiL2NhY2hlL0NhY2hlSW52YWxpZGF0b3JDb25maWcuc2NhbGE=) | `0.00% <0.00%> (-92.31%)` | :arrow_down: |
   | ... and [15 more](https://codecov.io/gh/apache/openwhisk/pull/4871/diff?src=pr&el=tree-more) | |
   
   ------
   
   [Continue to review full report at Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=continue).
   > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by [Codecov](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=footer). Last update [d495993...215fee3](https://codecov.io/gh/apache/openwhisk/pull/4871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r427007522



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +757,102 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "supplement prewarmed container when doesn't have enough container to handle activation" in {
+    val (containers, factory) = testContainers(6)
+    val feed = TestProbe()
+
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer

Review comment:
       I fixed 2 bugs in your code.
   It looks very good!




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] tysonnorris commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
tysonnorris commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425492449



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 cold starts happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containers, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer

Review comment:
       shouldn't the pool send `AdjustPrewarmedContainer` to itself eventually, once the `poolConfig.prewarmExpiredCheckPeriod` elapses?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r425547006



##########
File path: tests/src/test/scala/org/apache/openwhisk/core/containerpool/test/ContainerPoolTests.scala
##########
@@ -753,14 +758,211 @@ class ContainerPoolTests
     containers(1).expectMsg(Start(exec, memoryLimit))
 
     //removing 2 prewarm containers will start 2 containers via backfill
-    containers(0).send(pool, ContainerRemoved)
-    containers(1).send(pool, ContainerRemoved)
+    containers(0).send(pool, ContainerRemoved(true))
+    containers(1).send(pool, ContainerRemoved(true))
     containers(2).expectMsg(Start(exec, memoryLimit))
     containers(3).expectMsg(Start(exec, memoryLimit))
     //make sure extra prewarms are not started
     containers(4).expectNoMessage(100.milliseconds)
     containers(5).expectNoMessage(100.milliseconds)
   }
+
+  it should "remove the prewarmed container after ttl time if unused" in {
+    val (containers, factory) = testContainers(2)
+    val feed = TestProbe()
+
+    val reactive: Option[ReactivePrewarmingConfig] = Some(ReactivePrewarmingConfig(0, 2, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(2, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = Some(ttl.fromNow))))
+    stream.reset()
+
+    // Make sure prewarmed containers can be deleted from prewarmedPool due to unused
+    Thread.sleep(3.seconds.toMillis)
+
+    pool ! AdjustPrewarmedContainer
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Make sure prewarmed containers are deleted
+    Thread.sleep(2.seconds.toMillis)
+
+    stream.toString should include("prewarmed container is deleted")
+    stream.reset()
+  }
+
+  it should "adjust prewarm container run well without reactive config" in {
+    val (containers, factory) = testContainers(4)
+    val feed = TestProbe()
+
+    stream.reset()
+    val initialCount = 2
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 4),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit))))
+    containers(0).expectMsg(Start(exec, memoryLimit))
+    containers(1).expectMsg(Start(exec, memoryLimit))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+    pool ! AdjustPrewarmedContainer
+
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+  }
+
+  it should "adjust prewarm container run well with reactive config" in {
+    val (containers, factory) = testContainers(15)
+    val feed = TestProbe()
+
+    stream.reset()
+    val minCount = 0
+    val initialCount = 2
+    val maxCount = 4
+    val deadline: Option[Deadline] = Some(ttl.fromNow)
+    val reactive: Option[ReactivePrewarmingConfig] =
+      Some(ReactivePrewarmingConfig(minCount, maxCount, ttl, threshold, increment))
+    val pool =
+      system.actorOf(
+        ContainerPool
+          .props(
+            factory,
+            poolConfig(MemoryLimit.STD_MEMORY * 8),
+            feed.ref,
+            List(PrewarmingConfig(initialCount, exec, memoryLimit, reactive))))
+    containers(0).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(1).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(0).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(1).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // when invoker starts, include 0 prewarm container at the very beginning
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+
+    // the desiredCount should equal with initialCount when invoker starts
+    stream.toString should include(s"needs ${initialCount} desired prewarmed container")
+
+    stream.reset()
+
+    // Make sure the created prewarmed containers are expired
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    containers(0).expectMsg(Remove)
+    containers(1).expectMsg(Remove)
+    containers(0).send(pool, ContainerRemoved(false))
+    containers(1).send(pool, ContainerRemoved(false))
+
+    // Because already supplemented the prewarmed container, so currentCount should equal with initialCount
+    stream.toString should include(s"currentCount: ${initialCount} prewarmed container")
+
+    // the desiredCount should equal with minCount because cold start didn't happen
+    stream.toString should include(s"needs ${minCount} desired prewarmed container")
+    // Previously created prewarmed containers should be removed
+    stream.toString should include(s"removed ${initialCount} expired prewarmed container")
+
+    stream.reset()
+    val action = ExecutableWhiskAction(
+      EntityPath("actionSpace"),
+      EntityName("actionName"),
+      exec,
+      limits = ActionLimits(memory = MemoryLimit(memoryLimit)))
+    val run = createRunMessage(action, invocationNamespace)
+    // 2 cold starts happened
+    pool ! run
+    pool ! run
+    containers(2).expectMsg(run)
+    containers(3).expectMsg(run)
+
+    pool ! AdjustPrewarmedContainer
+    // Make sure adjustPrewarmContainer run finished
+    Thread.sleep(2.seconds.toMillis)
+
+    // Because already removed expired prewarmed containers, so currentCount should equal with 0
+    stream.toString should include(s"currentCount: 0 prewarmed container")
+    // the desiredCount should equal with 2 due to cold start happened
+    stream.toString should include(s"needs 2 desired prewarmed container")
+
+    containers(4).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(5).expectMsg(Start(exec, memoryLimit, Some(ttl)))
+    containers(4).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+    containers(5).send(pool, NeedWork(preWarmedData(exec.kind, expires = deadline)))
+
+    // Make sure the created prewarmed containers are expired
+    stream.reset()
+    Thread.sleep(ttl.toMillis)
+    pool ! AdjustPrewarmedContainer

Review comment:
       I think it is needed, because the prewarmExpirationCheckInterval is `1 minute`:
   ```
     def poolConfig(userMemory: ByteSize) =
       ContainerPoolConfig(userMemory, 0.5, false, FiniteDuration(1, TimeUnit.MINUTES))
   ```
   so the test needs to send `AdjustPrewarmedContainer` to the pool manually.
   The reason I don't change that value to a smaller one is that I want readers of the test code to see the details clearly.
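
   For illustration only, a hypothetical alternative (not what this PR's tests do) would be to construct the pool with a much shorter check interval so the scheduled message fires on its own; the four-argument `ContainerPoolConfig` signature is taken from the snippet above, and `fastPoolConfig` is an invented helper name:
   ```scala
   import java.util.concurrent.TimeUnit
   import scala.concurrent.duration.FiniteDuration

   // Hypothetical test helper: a 100 ms prewarm expiration check interval would let the periodic
   // AdjustPrewarmedContainer message arrive without sending it manually, at the cost of making
   // the timing behavior less explicit in the test body.
   def fastPoolConfig(userMemory: ByteSize) =
     ContainerPoolConfig(userMemory, 0.5, false, FiniteDuration(100, TimeUnit.MILLISECONDS))
   ```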




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [openwhisk] ningyougang commented on a change in pull request #4871: Adjust prewarm container dynamically

Posted by GitBox <gi...@apache.org>.
ningyougang commented on a change in pull request #4871:
URL: https://github.com/apache/openwhisk/pull/4871#discussion_r415528640



##########
File path: core/invoker/src/main/scala/org/apache/openwhisk/core/containerpool/ContainerPool.scala
##########
@@ -80,8 +85,55 @@ class ContainerPool(childFactory: ActorRefFactory => ActorRef,
   //periodically emit metrics (don't need to do this for each message!)
   context.system.scheduler.schedule(30.seconds, 10.seconds, self, EmitMetrics)
 
+  // Key is ColdStartKey, value is the number of cold starts within the last minute
+  var coldStartCount = immutable.Map.empty[ColdStartKey, Int]
+
   backfillPrewarms(true)
 
+  // check periodically every 1 minute; adjust the prewarmed containers (delete those unused for some time and create additional ones where needed)
+  context.system.scheduler.schedule(1.minute, 1.minute, self, AdjustPrewarmedContainer)
+
+  def adjustPrewarmedContainer(): Unit = {
+    prewarmConfig.foreach { config =>
+      // Delete unused prewarmed container until minCount is reached
+      val kind = config.exec.kind
+      val memory = config.memoryLimit
+      val ttlSeconds = config.ttl.toSeconds
+      val minCount = config.minCount
+      val containers = prewarmedPool.filter { warmInfo =>
+        warmInfo match {
+          case (_, PreWarmedData(_, `kind`, `memory`, _)) => true
+          case _                                          => false
+        }
+      }
+      var currentCount = containers.size
+      for ((container, data) <- containers) {
+        if (currentCount > minCount && JDuration
+              .between(data.created, Instant.now)
+              .compareTo(JDuration.ofSeconds(ttlSeconds)) > 0) {
+          container ! Remove
+          currentCount -= 1
+        }
+      }
+
+      // supplement some prewarmed container if cold start happened
+      val coldStartKey = ColdStartKey(kind, memory)
+      coldStartCount.get(coldStartKey) match {
+        case Some(value) =>
+          if (value >= config.threshold) {
+            val createdCount = value / config.increment
+            val count = if (createdCount > 0) createdCount else 1
+            prewarmContainerIfpossible(kind, memory, count)
+          } else {
+            // at least create 1 prewarmed container
+            prewarmContainerIfpossible(kind, memory, 1)

Review comment:
       Updated accordingly, you can check it again ^^
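
   An illustrative restatement of the supplement arithmetic quoted in the hunk above (this reflects the pre-review version; the merged logic may differ after this exchange). Once the per-minute cold-start count reaches the threshold, roughly one prewarmed container is added per `increment` cold starts, never fewer than one:
   ```scala
   object ColdStartMath {
     // math.max(coldStarts / increment, 1) is equivalent to the hunk's
     // `if (createdCount > 0) createdCount else 1` once the threshold is reached.
     def supplementCount(coldStarts: Int, threshold: Int, increment: Int): Int =
       if (coldStarts >= threshold) math.max(coldStarts / increment, 1)
       else 1 // pre-review behavior: always create at least one prewarmed container
   }
   ```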




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org