Hi Rob-Hindman
I have configured Any-Node failover clustered tasks successfully, and the tasks switch between the nodes after a failover of either node.
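For reference, the tasks were registered as Any-Node clustered scheduled tasks roughly as follows. This is only a sketch: the task name, action path, and trigger time are placeholders, not my exact values.

```powershell
# Define what the task runs and when it fires (placeholder values)
$action  = New-ScheduledTaskAction -Execute 'C:\Scripts\job.cmd'
$trigger = New-ScheduledTaskTrigger -Daily -At '4:25 PM'

# Register the task cluster-wide; TaskType AnyNode means it runs
# on a single available node and fails over with the cluster
Register-ClusteredScheduledTask -TaskName 'MyClusteredTask' `
    -TaskType AnyNode -Action $action -Trigger $trigger
```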
My Working Scenario:
I have set a task to trigger at 4:25 PM. Once the task triggers, its status changes to Running, the task executes successfully (this can be traced under the task's History in Task Scheduler), and the status resets to Ready once execution completes. The scheduled task performs as expected. This is the case where no failover/switch happens while execution is in progress.
Now, let us consider the same scenario with a failover. The same task triggers at 4:25 PM and is still running when the server (Node 1) goes down at 4:26 PM. The tasks switch to Node 2, but the execution that started at 4:25 PM on Node 1 does not continue on Node 2. Instead, the task sits in the Ready state on Node 2, waiting for its trigger condition/action to fire it again.
Expectation: the task should continue to run/execute on the other node once the owner node it was running on goes down, rather than merely switching to the next available node and waiting for the trigger to fire again.
Is this the designed behavior of failover clustered tasks, or is there a fix for this issue?
Regards
Prasanna Kumaran R