Delay between Agent Jobs possible?

So I have 2 separate Agent Jobs on SQL Server, each scheduled to run every 2 hours and offset by one hour: if Job A starts right now, Job B will start one hour from now, then Job A will start one hour after that, and so on.

98% of the time the jobs complete within 60 minutes, but occasionally one job takes over 60 minutes, overlaps with the other job, and causes that job to fail. I could make the interval bigger so that each job would get more time to complete, but I don't want to do that.

So my question is: is there any way to prevent these two jobs from colliding? I mean, if Job A takes over 60 minutes to complete, can I delay Job B so it only starts after Job A has completed?

Any feedback is greatly appreciated. Thank you!

One way would be to have both in the same job. Then you might need a timer that makes sure it is time for the second step to run.

It is possible to detect which jobs are currently running. Something along the lines of:

```sql
IF NOT EXISTS (
    SELECT *
    FROM msdb.dbo.sysjobactivity ja
    LEFT JOIN msdb.dbo.sysjobhistory jh
           ON ja.job_history_id = jh.instance_id
    JOIN msdb.dbo.sysjobs j
           ON ja.job_id = j.job_id
    JOIN msdb.dbo.sysjobsteps js
           ON ja.job_id = js.job_id
          AND ISNULL(ja.last_executed_step_id, 0) + 1 = js.step_id
    WHERE j.name = N'<<Your Job Name>>' -- << << << << CHANGE THIS!!!
      AND ja.session_id = (SELECT TOP 1 session_id
                           FROM msdb.dbo.syssessions
                           ORDER BY agent_start_date DESC)
      AND ja.start_execution_date IS NOT NULL
      AND ja.stop_execution_date IS NULL
)
BEGIN
    <<execute what I want since the other job is not running>>
END
```

You'll need to supply your specifics but, hopefully, this will get you started.

Can you put the two separate jobs as two, sequential, steps within a single job - or is it important that the second job starts "on the hour, every other hour"?

A couple of other approaches to consider...

  1. There is a system procedure to start a job (sp_start_job). Each job could have a final step that starts the other job, so one would always be running. You could add some logic to each job to have it "wait" until the top of the hour, if that is desired.
  2. It is also possible to disable and/or enable a job from T-SQL using sp_update_job. Each job could begin by disabling its counterpart and finish by re-enabling it. This could get messy if the error handling does NOT guarantee that this last step is always performed.
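Rough sketches of both approaches; the job names are placeholders:

```sql
-- Approach 1: the final step of Job A starts Job B.
EXEC msdb.dbo.sp_start_job @job_name = N'Job B';

-- Approach 2: Job B's first step disables Job A;
-- its final step re-enables it (and vice versa in Job A).
EXEC msdb.dbo.sp_update_job @job_name = N'Job A', @enabled = 0;
-- ... Job B's work ...
EXEC msdb.dbo.sp_update_job @job_name = N'Job A', @enabled = 1;
```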
Thank you all for the great responses.

They can't be in the same job.

I was thinking about a stored-procedure approach but I wanted to hear your answers first. This confirms it :slight_smile: I will try the solutions you gave me and think about how to apply them to my problem.

Thank you very much once again for the great responses!

You should abstract this (if at all possible). That is, job B should not "know" directly about job A (and vice versa). Instead, create tables that mark the underlying process those jobs are doing as having been completed. When job B starts, it can check the table, and if any prerequisite process(es) is not done, it can issue a WAITFOR DELAY and try again later.
For example, when job A starts it adds a row to the process table that means "I'm working on this right now". When it completes the part job B needs (even if job A is not completely finished yet itself), it updates that flag to say "It's done now". Job B simply checks the last entry in the table, sees if it's "in progress" or "complete", and processes accordingly. That way, job A could switch to being job X and job Y later and it wouldn't affect job B's processing at all: it would still "know" whether it was time to run or not.
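A minimal sketch of that pattern; the table and process names here are purely illustrative:

```sql
-- Hypothetical progress table.
CREATE TABLE dbo.ProcessProgress (
    ProcessName varchar(100) NOT NULL,
    Status      varchar(20)  NOT NULL, -- 'InProgress' / 'Complete'
    LastUpdated datetime     NOT NULL DEFAULT (GETDATE())
);

-- Job A, at start:
INSERT INTO dbo.ProcessProgress (ProcessName, Status)
VALUES ('daily invoice loads', 'InProgress');

-- Job A, once the part Job B depends on is finished:
UPDATE dbo.ProcessProgress
SET Status = 'Complete', LastUpdated = GETDATE()
WHERE ProcessName = 'daily invoice loads';

-- Job B, at start: wait until the prerequisite process is complete.
WHILE NOT EXISTS (SELECT 1
                  FROM dbo.ProcessProgress
                  WHERE ProcessName = 'daily invoice loads'
                    AND Status = 'Complete')
BEGIN
    WAITFOR DELAY '00:01:00'; -- re-check every minute
END;
```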

An alternative is to use an application lock (sp_getapplock): set it in Job A and have Job B check for it.

Mladen has an article on this.
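A minimal sketch, assuming both jobs agree on an arbitrary resource name (the name and timeout values here are just examples):

```sql
-- Job A: try to take the lock for the session; give up immediately if held.
DECLARE @result int;
EXEC @result = sp_getapplock
     @Resource    = 'JobAB_Mutex',
     @LockMode    = 'Exclusive',
     @LockOwner   = 'Session',
     @LockTimeout = 0;
IF @result >= 0
BEGIN
    -- ... Job A's work ...
    EXEC sp_releaseapplock @Resource = 'JobAB_Mutex', @LockOwner = 'Session';
END;

-- Job B does the same, but with a positive @LockTimeout (in milliseconds)
-- so it waits for Job A to finish instead of failing, e.g. 3600000 = 1 hour.
```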

Using WAITFOR DELAY in T-SQL:

WAITFOR DELAY '00:00:05' ---- 5 Second Delay

Using WAITFOR TIME in T-SQL:

The T-SQL resumes after a particular time has arrived.

/* Add 5 seconds to the current time so the
   batch waits for 5 seconds */
DECLARE @MyDateTime datetime;
SET @MyDateTime = DATEADD(s, 5, GETDATE());
WAITFOR TIME @MyDateTime;

For more details, please see this nicely written blog.

If Job A "dies", doesn't that leave a "locked" state in the process table, and thus Job B doesn't run? I've never come across a way that satisfactorily sorts this out, other than human intervention, and thus there's the risk that Job B doesn't run (the consequences of which may be significant - e.g. if it is a backup that then fails to run).

I suppose another WatchDog job could run, saying "Job A still locked after more than X hours", and start sending smoke signals or some such ... it still worries me a bit that something might fall through the cracks and Job B doesn't run for a prolonged period. Hence my preference would be to join them nose-to-tail in a single job.
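A sketch of what such a WatchDog could check, run on its own schedule; the 2-hour threshold is just an example:

```sql
-- List jobs that have been running for more than 120 minutes.
SELECT j.name,
       ja.start_execution_date,
       DATEDIFF(MINUTE, ja.start_execution_date, GETDATE()) AS running_minutes
FROM msdb.dbo.sysjobactivity ja
JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
WHERE ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL
  AND ja.session_id = (SELECT TOP 1 session_id
                       FROM msdb.dbo.syssessions
                       ORDER BY agent_start_date DESC)
  AND DATEDIFF(MINUTE, ja.start_execution_date, GETDATE()) > 120;
-- If this returns any rows, raise the alarm (e.g. via msdb.dbo.sp_send_dbmail).
```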

I still think it's poor design to hard-code such dependencies, and it's false besides. Job B is not waiting on Job A, it's waiting on something that job A is doing. A very important difference, albeit a subtle one. And what if there are two (or more) prerequisites for a job starting? How do you hard-code the line up then?? And if Job A fails hard enough that it can't even update the status, then how in your method would job B ever get started anyway?

With an abstracted approach, job B can check the timestamp on the last row and "know" if job A somehow got delayed/derailed for any reason.

It depends on what OP wants.

So far we know that:
[1] Job A and B cannot execute concurrently

[2] Job A and B cannot be combined into one big job

What we don't know is:
[1] If Job A fails, should Job B execute?

[2] If Job A executes for a prolonged period, what happens to Job B?

@Kristen's WatchDog job is a good idea. :+1: When I have time, I will implement one on my servers.


[1] If Job A fails, should Job B execute? Yes, it should still execute.

[2] If Job A executes for a prolonged period, what happens to Job B? To be honest, I've never actually thought about this, because I've never had Job A or Job B run for more than an hour and a half.

Thank you all for helping me and contributing to this topic :slight_smile:

I would just add another step to Job A that kicks off Job B; perhaps that step first has a WAITFOR DELAY in it. You can have the first step continue to the next step even on failure.

I've used this method plenty of times in the past. It's an easy way to stagger maintenance jobs.
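A sketch of wiring that up with sp_add_jobstep; the job and step names are placeholders:

```sql
-- Append a final step to Job A that waits, then starts Job B.
EXEC msdb.dbo.sp_add_jobstep
     @job_name  = N'Job A',
     @step_name = N'Start Job B',
     @subsystem = N'TSQL',
     @command   = N'WAITFOR DELAY ''00:05:00'';
                    EXEC msdb.dbo.sp_start_job @job_name = N''Job B'';';
-- Then set the previous step's on-fail action to "Go to the next step"
-- so Job B still runs even if Job A's main step fails.
```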

You don't need, or want, a separate watchdog process. Avoiding that need is the point of having job A log its progress into a shared log/"progress" table. Think about it: you don't really want to have to create watchdogs for every sequence of processes you write, do you?

Any subsequent job -- even a new job created tomorrow -- that needs result(s) from a job A process can check the log/"progress" table for the process it needs from job A. Who knows, maybe next week job R will take over doing that process. It doesn't matter: if you abstract it, then it's job R writing the progress row, but all subsequent tasks, in this case job B, continue to function exactly as they did before, with no code changes required. That is the point.

Likewise, if job B suddenly has 2 prerequisite processes instead of 1, it simply looks for both processes in the table instead of one, but the original processes themselves, before job B, don't have to be modified.

How do you handle Job A crashing and not updating its status to "Finished" [i.e. without a watchdog of some sort]?

We've used SQL Sentry's Performance Advisor to help with job monitoring. It tracks average job times and can notify you if a job is running long. It was very handy for some SSIS packages that connected to Oracle to get lots of data into SQL Server. We couldn't really monitor the Oracle side, but we could monitor the Agent job that kicked off the package.

You will have a generic/standard watchdog over all jobs, but you don't have specific watchdogs for each process. The generic check should also "know" the priority / notification level of each process, which task owns/does it, and who all should be notified if it fails, and on what schedule. How simple or complex you get with that is up to you.

As to the job A process(es) in question, any subsequent job that uses that info, job B and/or other job(s), will check it. It has to, to itself "know" what to do. If job B/whatever detects a problem with an earlier process, it can send an immediate notification to the generic check, with whatever priority is needed.

If no subsequent process uses that data, then it falls back on the generic job check.

Note that I'm careful to say "process" not "job A". The logging should be task-based, not based on the current container. Say, for example, the job A process here is "daily invoice loads". Then a unique description of that process should be in the log, and that is what job B should look up, not "job A finished". Again, who knows, maybe three weeks from now some other process must also finish before "daily invoice loads" is considered complete.

Yes, this takes some thought and design, but it's actually much easier to manage and track things once it's in place. You can start very simply, with a "pass/fail" setup, and enhance it over time.