Posted to commits@sling.apache.org by jo...@apache.org on 2022/06/13 14:14:16 UTC

[sling-site] 01/01: mention the necessity to unschedule scheduled jobs

This is an automated email from the ASF dual-hosted git repository.

joerghoh pushed a commit to branch eventing-unschedule-scheduled-jobs
in repository https://gitbox.apache.org/repos/asf/sling-site.git

commit 40d30f6c79ac6f260047c5f4f0fbf8579da759d0
Author: Joerg Hoh <jh...@adobe.com>
AuthorDate: Mon Jun 13 16:14:03 2022 +0200

    mention the necessity to unschedule scheduled jobs
---
 .../apache-sling-eventing-and-job-handling.md      | 60 ++++++++++++++--------
 1 file changed, 38 insertions(+), 22 deletions(-)

diff --git a/src/main/jbake/content/documentation/bundles/apache-sling-eventing-and-job-handling.md b/src/main/jbake/content/documentation/bundles/apache-sling-eventing-and-job-handling.md
index 4c9281c31..680f62aae 100644
--- a/src/main/jbake/content/documentation/bundles/apache-sling-eventing-and-job-handling.md
+++ b/src/main/jbake/content/documentation/bundles/apache-sling-eventing-and-job-handling.md
@@ -1,4 +1,4 @@
-title=Apache Sling Eventing and Job Handling		
+title=Apache Sling Eventing and Job Handling
 type=page
 status=published
 tags=core,events
@@ -26,9 +26,9 @@ In general, the eventing mechanism (OSGi EventAdmin) has no knowledge about the
 
 On the other hand, there are use cases where the guarantee of processing is a must and usually this comes with the requirement of processing exactly once. Typical examples are sending notification emails (or sms), post processing of content (like thumbnail generation of images or documents), workflow steps etc.
 
-The Sling Event Support adds the notion of a job. A job is a special event that has to be processed exactly once. 
+The Sling Event Support adds the notion of a job. A job is a special event that has to be processed exactly once.
 To be precise, the processing guarantee is at least once. However, the time window for a single job where exactly
-once can't be guaranteed is very small. It happens if the instance which processes a job crashes after the job 
+once can't be guaranteed is very small. It happens if the instance which processes a job crashes after the job
 processing is finished but before this state is persisted. Therefore a job consumer should be prepared to process
 a job more than once. Of course, if there is no job consumer for a job, the job is never processed. However this
 is considered a deployment error.
@@ -44,20 +44,20 @@ A job consists of two parts, the job topic describing the nature of the job and
         import org.apache.felix.scr.annotations.Reference;
         import java.util.Map;
         import java.util.HashMap;
-        
+
         @Component
         public class MyComponent {
-        
+
             @Reference
             private JobManager jobManager;
-            
+
             public void startJob() {
                 final Map<String, Object> props = new HashMap<String, Object>();
                 props.put("item1", "/something");
                 props.put("count", 5);
-                
+
                 jobManager.addJob("my/special/jobtopic", props);
-            }        
+            }
         }
 
 The job topic follows the conventions for the topic of an OSGi event. All objects in the payload must be serializable and publicly available (exported by a bundle). This is required as the job is persisted and unmarshalled before processing.
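+
+For illustration, a payload entry that goes beyond simple types could look like the following sketch (`props` as in the example above). The class name is made up; in practice it is usually easier to stick to `String`, numbers and other JDK types:
+
+        // hypothetical payload type: it must implement Serializable and be exported by its bundle,
+        // because the job (including its payload) is persisted before it is processed
+        public class ImageReference implements java.io.Serializable {
+            private static final long serialVersionUID = 1L;
+            public String path;
+        }
+
+        props.put("image", new ImageReference());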
@@ -83,21 +83,37 @@ An example code for scheduling a job looks like this:
     @Component
     public class MyComponent {
 
+        private static final String TOPIC = "my/special/topic";
+
         @Reference
         private JobManager jobManager;
 
         public void startScheduledJob() {
-            ScheduleBuilder scheduleBuilder = jobManager.createJob("my/special/jobtopic").schedule();
-            scheduleBuilder.daily(0,0); // execute daily at midnight
-            if (scheduleBuilder.add() == null) {
+            Collection<ScheduledJobInfo> myJobs = jobManager.getScheduledJobs(TOPIC, 10, null);
+            if (myJobs.isEmpty()) {
+              // daily invocation not yet scheduled
+              ScheduleBuilder scheduleBuilder = jobManager.createJob(TOPIC).schedule();
+              scheduleBuilder.daily(0,0); // execute daily at midnight
+              if (scheduleBuilder.add() == null) {
                 // something went wrong here, use scheduleBuilder.add(List<String>) instead to get further information about the error
+              }
             }
         }
     }
 
 
 Internally the scheduled Jobs use the [Commons Scheduler Service](/documentation/bundles/scheduler-service-commons-scheduler.html). But in addition they are persisted (by default below `/var/eventing/scheduled-jobs`) and survive therefore even server restarts. When the scheduled time is reached, the job is automatically added as regular Sling Job through the `JobManager`.
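+
+Other schedules than `daily` are available on the `ScheduleBuilder` as well, for example `hourly`, `weekly`, `cron` and `at` (check the `JobBuilder.ScheduleBuilder` interface for the exact signatures). A brief sketch using a Quartz-style cron expression and a made-up topic:
+
+        ScheduleBuilder nightly = jobManager.createJob("my/special/nightlytopic").schedule();
+        nightly.cron("0 0 3 * * ?"); // every night at 3am
+        if (nightly.add() == null) {
+            // scheduling failed, use add(List<String>) to get further information about the error
+        }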
 
+**NOTE**: A scheduled job is not unscheduled automatically; you have to remove it yourself when it is no longer needed:
+
+      public void stopScheduledJob() {
+          Collection<ScheduledJobInfo> myJobs = jobManager.getScheduledJobs(TOPIC, 10, null);
+          myJobs.forEach(sji -> sji.unschedule());
+      }
+
+Therefore it is best to check upfront whether the schedule already exists and to register it only if it does not exist yet (as shown above).
+
 
 ### Job Consumers
 
@@ -119,9 +135,9 @@ A job consumer is a service consuming and processing a job. It registers itself
             }
         }
 The consumer can return *JobResult.OK*, indicating that the job has been processed, *JobResult.FAILED*, indicating that the processing failed but can be retried, or *JobResult.CANCEL*, indicating that the processing has failed permanently.
-   
+
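+To make these three results concrete, a consumer for the job added above could look roughly like the sketch below. The class name is made up, the actual work is omitted, the annotations follow the Felix SCR style used elsewhere on this page, and the processing should be idempotent because a job can be delivered more than once:
+
+        import org.apache.felix.scr.annotations.Component;
+        import org.apache.felix.scr.annotations.Property;
+        import org.apache.felix.scr.annotations.Service;
+        import org.apache.sling.event.jobs.Job;
+        import org.apache.sling.event.jobs.consumer.JobConsumer;
+
+        @Component
+        @Service(value = JobConsumer.class)
+        @Property(name = JobConsumer.PROPERTY_TOPICS, value = "my/special/jobtopic")
+        public class ResultMappingJobConsumer implements JobConsumer {
+
+            public JobResult process(final Job job) {
+                final String item = job.getProperty("item1", String.class);
+                if (item == null) {
+                    // invalid payload, retrying will not help
+                    return JobResult.CANCEL;
+                }
+                try {
+                    // do the actual (idempotent) work here
+                    return JobResult.OK;
+                } catch (RuntimeException e) {
+                    // temporary problem, the job will be retried
+                    return JobResult.FAILED;
+                }
+            }
+        }
+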
 ### Job Executors
-If the job consumer needs more features like providing progress information or adding more information of the processing,*JobExecutor* should be implemented.      
+If the job consumer needs more features, like providing progress information or adding more information about the processing, a *JobExecutor* should be implemented.
 A job executor is a service processing a job. It registers itself as an OSGi service together with a property defining which topics this consumer can process:
 
         import org.apache.felix.scr.annotations.Component;
@@ -137,32 +153,32 @@ A job executor is a service processing a job. It registers itself as an OSGi ser
 
             public JobExecutionResult process(final Job job, JobExecutionContext context) {
                 //process the job and return the result
-                
+
                 //initialize job progress with n number of steps
                 context.getJobContext().initProgress(n, -1);
                 context.getJobContext().log("Job initialized");
-                
+
                 //increment progress by 2 steps
                 context.getJobContext().incrementProgressCount(2);
                 context.getJobContext().log("2 steps completed.");
-                
+
                 //stop processing if job was cancelled
                 if(context.isStopped()) {
                     context.getJobContext().log("Job Stopped after 4 steps.");
                     return context.result().message(resultMessage).cancelled();
                 }
-                
+
                 //add job log
                 context.getJobContext().log("Job finished.");
-                
+
                 return context.result().message(resultMessage).succeeded();
             }
         }
-        
+
 *JobExecutionContext* can be used by the executor to update the job progress, add log messages, build a *JobExecutionResult* and check whether the job has been asked to stop via `context.isStopped()`.
 The executor returns the result "succeeded" by calling `context.result().message(message).succeeded()`, "failed" by calling `context.result().message(message).failed()` and "cancelled" by calling `context.result().message(message).cancelled()`.
-The *Job* interface allows to query the topic, the result message, progress, logs, the payload and additional information about the current job. 
-     
+The *Job* interface can be used to query the topic, the result message, progress, logs, the payload and additional information about the current job.
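+
+Inside a consumer or executor the payload can be read back from the `Job` in a typed way; a short sketch, assuming a `Job job` as in the examples above and the property names used in the `addJob` example:
+
+        final String item = job.getProperty("item1", String.class);
+        final int count = job.getProperty("count", 5); // with a default value
+        final String topic = job.getTopic();
+        final int retries = job.getRetryCount(); // how often this job has been retried so far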
+
 ### Job Handling
 
 New jobs are first persisted in the resource tree (for failover etc.), then the job is distributed to an instance responsible for processing the job and on that instance the job is put into a processing queue. There are different types of queues defining how the jobs are processed (one after the other, in parallel etc.).
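+
+The queues and their statistics can be inspected at runtime through the `JobManager`, which helps when analysing how jobs are distributed and processed; a sketch, assuming a `jobManager` reference as in the examples above and an SLF4J `logger`:
+
+        for (Queue queue : jobManager.getQueues()) {
+            Statistics stats = queue.getStatistics();
+            logger.info("Queue {}: {} queued, {} active, {} failed",
+                    queue.getName(),
+                    stats.getNumberOfQueuedJobs(),
+                    stats.getNumberOfActiveJobs(),
+                    stats.getNumberOfFailedJobs());
+        }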
@@ -219,7 +235,7 @@ If a user action results in the creation of a job, the thread processing the use
 
 If an observation event or any other OSGi event results in the creation of a job, special care needs to be taken in a clustered installation to avoid creating the job on all cluster instances. The easiest way to achieve this is to use the topology API and make sure the job is only created on the leader instance.
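+
+One possible sketch for this, using the Sling Discovery API: reference the `DiscoveryService` in the component that reacts to the event and only add the job on the leader (error handling and the payload are omitted here):
+
+        import org.apache.sling.discovery.DiscoveryService;
+
+        @Reference
+        private DiscoveryService discoveryService;
+
+        private void createJobOnLeaderOnly() {
+            // only the leader adds the job, so it is created exactly once in the cluster
+            if (discoveryService.getTopology().getLocalInstance().isLeader()) {
+                jobManager.addJob("my/special/jobtopic", null); // payload omitted in this sketch
+            }
+        }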
 
-  
+
 ## Distributed Events
 
 In addition to the job handling, the Sling Event support adds handling for distributed events. A distributed event is an OSGi event which is sent across JVM boundaries to a different VM. A potential use case is to broadcast information in a clustered environment.
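+
+For example, broadcasting an event to all instances could look roughly like the sketch below. The `event.distribute` marker property is an assumption here; whether and how an event is distributed depends on the deployed distributed event support, so verify this against your setup:
+
+        import java.util.HashMap;
+        import java.util.Map;
+        import org.osgi.service.event.Event;
+        import org.osgi.service.event.EventAdmin;
+
+        @Reference
+        private EventAdmin eventAdmin;
+
+        public void broadcast() {
+            final Map<String, Object> props = new HashMap<String, Object>();
+            props.put("change", "/content/something");
+            props.put("event.distribute", ""); // assumed marker property; the value is not evaluated
+            eventAdmin.postEvent(new Event("my/distributed/topic", props));
+        }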