We have a number of transformation jobs that are triggered using an iterator, and I want to track the running status of each job so I can decide what to run on the next attempt if a failure occurs. I do not want to re-run the transformation jobs that succeeded in the most recent run, in order to reduce the overall run time.
The most flexible and robust method I have used is to write status updates back to a table after each transformation execution (in our case). We typically have a universal table that lets us write back the project, job, status, and last-update timestamp. We also have some other columns, but those are the main ones. We then use the status from that table to determine which jobs need to be run and which don't. It's a simple but very robust way of trimming down job run times. I hope this helps.
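As a rough, non-Matillion-specific illustration of that pattern, here is a minimal Python sketch. The table name job_status, its columns, and the run_transformation() callable are all hypothetical placeholders; in practice the table would live in your warehouse and the "run" step would be your transformation job, but the select-then-update flow is the same. SQLite is used only so the example is self-contained.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical stand-in for the real transformation job; returns True on success.
def run_transformation(job_name: str) -> bool:
    print(f"running {job_name}")
    return True

conn = sqlite3.connect(":memory:")  # stand-in for the warehouse connection
conn.execute("""
    CREATE TABLE IF NOT EXISTS job_status (
        project     TEXT,
        job         TEXT PRIMARY KEY,
        status      TEXT,           -- e.g. SUCCESS / FAILED / PENDING
        last_update TEXT
    )
""")

# Seed with the jobs the iterator would normally loop over.
jobs = [("demo_project", "stage_orders"), ("demo_project", "stage_customers")]
conn.executemany(
    "INSERT OR IGNORE INTO job_status VALUES (?, ?, 'PENDING', NULL)", jobs
)

# Only pick up jobs that have not yet succeeded -- this is what trims the run time.
to_run = conn.execute(
    "SELECT job FROM job_status WHERE status != 'SUCCESS'"
).fetchall()

for (job_name,) in to_run:
    ok = run_transformation(job_name)
    conn.execute(
        "UPDATE job_status SET status = ?, last_update = ? WHERE job = ?",
        ("SUCCESS" if ok else "FAILED",
         datetime.now(timezone.utc).isoformat(),
         job_name),
    )
    conn.commit()
```

On the next orchestration run, the same query skips every job already marked SUCCESS, so only the failed or pending ones are retried.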
I just thought of another last-ditch method that I know will work, but it should be used as a last resort. You could use Python and the "requests" module to do the same cursor-based paging, appending each page to the previous one. When you are done looping through the pages you should have all the JSON. At that point you can look at the combined output and even run some JSON checks to make sure it's valid. Fundamentally, this is what Matillion is doing under the hood of the API Profile and Extract components; they just make it much easier in the UI than writing it all by hand. So, like I said, this would be a last resort, but it is definitely doable and would work.
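For what that looks like in practice, here is a minimal sketch of cursor-based paging with "requests". The endpoint URL, the "cursor"/"next_cursor" parameter names, and the "records" key are assumptions about a generic API; real APIs name these differently, so adjust them to whatever the API documentation specifies.

```python
import requests

# Hypothetical endpoint; replace with the API you are actually calling.
BASE_URL = "https://api.example.com/v1/records"

def fetch_all_pages(session: requests.Session) -> list:
    """Follow the cursor from page to page, appending results as we go."""
    all_records = []
    cursor = None
    while True:
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = session.get(BASE_URL, params=params, timeout=30)
        resp.raise_for_status()           # fail fast on HTTP errors
        page = resp.json()                # also confirms the page is well-formed JSON
        all_records.extend(page.get("records", []))
        cursor = page.get("next_cursor")  # assumed key; missing/None means last page
        if not cursor:
            break
    return all_records

if __name__ == "__main__":
    with requests.Session() as s:
        records = fetch_all_pages(s)
        print(f"fetched {len(records)} records")
```

Once the loop finishes, all_records holds the combined payload, which you can validate or write out before loading it downstream.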