It's impossible to continue pipeline work after a failure, even when the cause no longer exists

Environment

  1. An external system sends files to S3
  2. A SingleStore pipeline extracts, transforms, and stores the data

Problem
The external system sometimes removes some of the exported files from S3.
The pipeline tries to extract a file that no longer exists and, after several retries, goes into the “Error” state.
After a restart, the pipeline tries to extract the same file again and ends up in the “Error” state again.
The root user does not have permission to delete the error-related data, since it is located in the information schema.
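For reference, that error data can be inspected (but not removed) through information_schema; a minimal sketch, assuming the pipeline is named my_pipeline:

```sql
-- Read-only system view of pipeline errors; rows here cannot be deleted by the user.
SELECT *
FROM information_schema.PIPELINES_ERRORS
WHERE PIPELINE_NAME = 'my_pipeline';
```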

To keep using the same pipeline, we have to create the missing file with the same structure but no data and upload it to S3.
A simpler but very heavy-handed alternative is to drop the pipeline, delete all already-processed files from S3, and then recreate the pipeline.

Is there a way to continue the pipeline while ignoring previous errors (e.g., bad files)?
The files are .csv.gz, the pipeline's target is a stored procedure, SKIP PARSER ERRORS does not help, and SKIP ALL ERRORS / IGNORE ALL ERRORS is not allowed.
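For context, the pipeline definition looks roughly like the following sketch; the pipeline name, procedure name, bucket path, and credentials are placeholders, not the real configuration:

```sql
-- Hypothetical pipeline: loads .csv.gz files from S3 into a stored procedure.
-- Gzipped files are decompressed by the pipeline automatically.
CREATE PIPELINE my_pipeline AS
LOAD DATA S3 'my-bucket/exports/'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}'
INTO PROCEDURE my_transform_proc
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```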

ALTER PIPELINE DROP ORPHAN FILES (ALTER PIPELINE · SingleStore Documentation) should work as an easier, but still manual, way to recover from that situation. It’ll clear metadata for unloaded files, triggering a rescan of the bucket - the end result will be that we’ll attempt to load only those files that exist at the time of the alter.
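A minimal sequence, assuming the pipeline is named my_pipeline (ALTER PIPELINE generally requires the pipeline to be stopped first):

```sql
-- Skip this if the pipeline is already stopped or in the Error state.
STOP PIPELINE my_pipeline;

-- Forget metadata for files that were never loaded, then restart;
-- the source is rescanned and only files that still exist are loaded.
ALTER PIPELINE my_pipeline DROP ORPHAN FILES;
START PIPELINE my_pipeline;
```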

Setting @@pipelines_stop_on_error to false will likely make the pipeline automatically skip the problem file in this case, without going into the error state. It's a bit of a heavy hammer, however, as it applies to all pipelines and to errors besides this particular one.
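For completeness, a sketch of that setting; it is a global engine variable, and the accepted literal may be OFF or false depending on version:

```sql
-- Applies to ALL pipelines: on errors, skip the offending batch/file and keep
-- running instead of transitioning the pipeline to the Error state.
SET GLOBAL pipelines_stop_on_error = OFF;
```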

Thank you very much.
According to the docs, this command works even better:

The pipeline will not try to load these files again unless they reappear in the source. Use this command to instruct a pipeline that some files in the source have been removed, and to not try to load them. This command will not forget metadata associated with already Loaded or Skipped files; SingleStore will not try to reload such files.
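After running DROP ORPHAN FILES and restarting, the per-file state can be checked to confirm which files are Loaded, Skipped, or still pending; a sketch, assuming the same placeholder pipeline name:

```sql
-- Per-file pipeline metadata; Loaded/Skipped files keep their state and are
-- not reloaded, while dropped orphan entries stay gone unless the files reappear.
SELECT FILE_NAME, FILE_STATE
FROM information_schema.PIPELINES_FILES
WHERE PIPELINE_NAME = 'my_pipeline';
```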