
How to Set Up a Directory Monitor for Automated Backups

Keeping critical files backed up in real time reduces data loss risk and simplifies recovery. This guide shows a practical, repeatable setup to monitor a directory and trigger automated backups when changes occur. It uses broadly available tools and applies whether you’re on Windows, macOS, or Linux.

Overview

  • Goal: Detect file changes (create, modify, delete, rename) in a directory and automatically back up affected files to a safe location.
  • Approach: Use a filesystem watcher to detect events and a script to perform backups. Options range from built-in OS tools to cross-platform utilities.

1. Choose your tools

  • Windows: Use PowerShell’s FileSystemWatcher (for example via Register-ObjectEvent) or a third-party file-watcher agent.
  • macOS: Use fswatch or launchd combined with a script.
  • Linux: Use inotifywait (from inotify-tools) or a systemd path unit combined with a script.
  • Cross-platform: Use a language runtime (Python watchdog library, Node.js chokidar) for a single script that runs on all systems.

Assume a cross-platform Python approach for examples below (recommended for portability).
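Before reaching for a library, it helps to see what a watcher fundamentally does: snapshot the directory tree, then diff snapshots to classify changes. The stdlib-only sketch below illustrates that idea (the `snapshot` and `diff` helpers are purely illustrative, not part of watchdog, which uses native OS events instead of polling):

```python
import os

def snapshot(root):
    """Map each file path under root to its last-modified time."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            try:
                state[full] = os.path.getmtime(full)
            except FileNotFoundError:
                pass  # file vanished between walk and stat
    return state

def diff(old, new):
    """Return (created, modified, deleted) path lists between two snapshots."""
    created = [p for p in new if p not in old]
    modified = [p for p in new if p in old and new[p] != old[p]]
    deleted = [p for p in old if p not in new]
    return created, modified, deleted
```

Event-based watchers such as watchdog deliver the same create/modify/delete classification without the cost of rescanning the tree on every poll.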

2. Prerequisites

  • Python 3.8+ installed.
  • pip available.
  • Sufficient permissions to read source and write to backup destination.
  • A destination for backups: local folder, network share, or cloud sync folder (Dropbox, OneDrive, Google Drive, etc.).
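A quick pre-flight check of the permission prerequisites above can save debugging later. This is a minimal sketch; `check_backup_paths` is an illustrative helper, not a standard API:

```python
import os

def check_backup_paths(source_dir, backup_dir):
    """Return a list of human-readable problems; empty when everything checks out."""
    problems = []
    if not os.path.isdir(source_dir):
        problems.append(f"source does not exist: {source_dir}")
    elif not os.access(source_dir, os.R_OK):
        problems.append(f"source is not readable: {source_dir}")
    if not os.path.isdir(backup_dir):
        problems.append(f"backup destination does not exist: {backup_dir}")
    elif not os.access(backup_dir, os.W_OK):
        problems.append(f"backup destination is not writable: {backup_dir}")
    return problems
```

Running this once at startup and refusing to watch until the list is empty fails fast instead of erroring on the first backup attempt.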

3. Install the watcher library

Run:

```bash
pip install watchdog
```

4. Define backup strategy

Decide how you want backups to behave:

  • Mirror backup: replicate the directory structure and overwrite changed files.
  • Versioned backup: keep timestamped copies for each change.
  • Incremental backup: copy only changed files, optionally archive periodically.
  • Retention policy: how long to keep versions and how much space to allow.

Example decisions (used in sample script):

  • Incremental copy of changed files.
  • Keep versions by appending ISO timestamp to filename.
  • Retain versions for 30 days (requires optional cleanup task).
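The versioned-naming and 30-day retention decisions above can be sketched in a few lines (the helper names `versioned_name` and `is_expired` are illustrative; the full script in the next section applies the same logic):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # matches the example retention decision above

def versioned_name(filename, now=None):
    """Append a UTC ISO-style timestamp so every change keeps its own copy."""
    now = now or datetime.now(timezone.utc)
    return f"{filename}.{now.strftime('%Y%m%dT%H%M%SZ')}"

def is_expired(mtime, now=None):
    """True when a backup version is older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return mtime < now - timedelta(days=RETENTION_DAYS)
```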

5. Create the watcher script (Python)

Save as monitor_backup.py:

```python
#!/usr/bin/env python3
import os
import shutil
import time
from datetime import datetime, timedelta
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# CONFIG
SOURCE_DIR = r"/path/to/watch"   # change this
BACKUP_DIR = r"/path/to/backup"  # change this
RETENTION_DAYS = 30              # remove versions older than this
VERSIONED = True                 # append timestamp to backups


def ensure_dir(path):
    os.makedirs(path, exist_ok=True)


def backup_file(src_path):
    rel = os.path.relpath(src_path, SOURCE_DIR)
    dest_dir = os.path.join(BACKUP_DIR, os.path.dirname(rel))
    ensure_dir(dest_dir)
    base = os.path.basename(src_path)
    if VERSIONED:
        ts = datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        name = f"{base}.{ts}"
    else:
        name = base
    dest_path = os.path.join(dest_dir, name)
    try:
        shutil.copy2(src_path, dest_path)
        print(f"Backed up: {src_path} -> {dest_path}")
    except FileNotFoundError:
        # file may be temporarily unavailable
        print(f"Source missing when attempting backup: {src_path}")
    except Exception as e:
        print(f"Backup error for {src_path}: {e}")


def cleanup_old_versions():
    cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)
    for root, _, files in os.walk(BACKUP_DIR):
        for f in files:
            full = os.path.join(root, f)
            try:
                mtime = datetime.utcfromtimestamp(os.path.getmtime(full))
                if mtime < cutoff:
                    os.remove(full)
                    print(f"Removed old backup: {full}")
            except Exception as e:
                print(f"Cleanup error for {full}: {e}")


class ChangeHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            backup_file(event.src_path)

    def on_modified(self, event):
        if not event.is_directory:
            backup_file(event.src_path)

    def on_moved(self, event):
        if not event.is_directory:
            # backup destination path for moved file
            backup_file(event.dest_path)

    def on_deleted(self, event):
        # optional: note deletion; could also remove backups or leave versions
        if not event.is_directory:
            print(f"Deleted (backup versions retained): {event.src_path}")


if __name__ == "__main__":
    ensure_dir(BACKUP_DIR)
    cleanup_old_versions()  # prune expired versions at startup; schedule periodically for long runs
    observer = Observer()
    observer.schedule(ChangeHandler(), SOURCE_DIR, recursive=True)
    observer.start()
    print(f"Watching {SOURCE_DIR}; backups go to {BACKUP_DIR}")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```

Run it with `python monitor_backup.py`; the watcher runs until interrupted (Ctrl+C), backing up files as they change.
