Projects

Tools and systems I've built to solve real problems in production environments. Much of my strongest work is proprietary — built for internal use and not publicly available. The open-source projects below are representative; the case studies at the bottom reflect the kind of operational work that doesn't live on GitHub.

Coil

Problem: Existing tools for turning Python projects into executables — PyInstaller, cx_Freeze, Nuitka, py2exe — are all painful. Hidden imports, missing DLLs, spec files, hook scripts, cryptic errors. You spend more time fighting the tool than building your app.

Approach: Coil takes an opinionated approach: directory in, executable out. Point it at your project folder, and it handles the rest. Auto-detects entry points and dependencies, bundles an embedded Python runtime (no Python needed on target), and produces a single portable .exe or bundled directory. It also includes a built-in decompiler to recover your own source when you need it.
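
The entry-point auto-detection can be sketched roughly like this. The function name and heuristics below are illustrative only, not Coil's actual implementation:

```python
from pathlib import Path

def guess_entry_point(project_dir):
    """Heuristic entry-point detection, in the spirit of what Coil automates.
    Returns the most likely launch script in a project directory, or None."""
    root = Path(project_dir)
    # Conventional entry-point filenames take priority.
    for name in ("__main__.py", "main.py", "app.py", "run.py"):
        candidate = root / name
        if candidate.is_file():
            return candidate
    # Fall back to any top-level script with a __main__ guard.
    for path in sorted(root.glob("*.py")):
        if 'if __name__ == "__main__"' in path.read_text(errors="ignore"):
            return path
    return None
```

In practice a detector like this would be one input among several (setup metadata, console scripts, import graphs); the point is that the user never has to name the entry point by hand.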

Outcome: Published on PyPI as coil-compiler. Supports portable and bundled build modes, GUI auto-detection for frameworks like tkinter/PyQt/PySide, and cached builds for faster iteration. This is my flagship project.

Python C CLI PyPI
View on GitHub →

Feather

Problem: In production imaging environments, oversized TIFF and JPEG files slow down processing pipelines, eat storage, and create bottlenecks when moving files across networks. Manual resizing doesn't scale.

Approach: Built a desktop application with a PyQt5 interface that dynamically resizes and compresses images to target file sizes. Uses multithreaded batch processing with a thread-safe progress tracker, handles both TIFF and JPEG formats, and includes memory management with forced garbage collection to handle large batches without crashing.
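
The target-size logic can be sketched as a binary search over JPEG quality. The encoder is injected here so the sketch stays library-free; Feather itself encodes with Pillow:

```python
def fit_to_target(encode, target_bytes, lo=10, hi=95):
    """Find the highest quality whose encoded output fits target_bytes.
    `encode(quality) -> bytes` is supplied by the caller (e.g. a Pillow save
    into a BytesIO buffer). Returns (quality, data) or None if nothing fits."""
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        data = encode(q)
        if len(data) <= target_bytes:
            best = (q, data)   # fits; try a higher quality
            lo = q + 1
        else:
            hi = q - 1         # too big; lower the quality
    return best
```

Because JPEG size grows with quality, the search converges in at most seven encodes over the 10-95 range, which keeps batch jobs fast even on large TIFFs.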

Outcome: Used internally to process large volumes of scanned documents. Handles batch jobs across network directories with visual progress tracking and error logging.

Python PyQt5 Pillow Multithreading
View on GitHub →

File Processor

Problem: Production workflows require converting large volumes of PDFs to TIFF or JPEG, and converting between image formats — all across network shares, reliably, without manual intervention. Off-the-shelf tools don't handle the scale or the environment.

Approach: Built a multiprocessing-based file processor with a PyQt5 GUI. Features a job queue manager with configurable processing profiles, automatic CPU core rebalancing, system tray integration, and support for concurrent job execution across network directories. Handles both frozen (compiled) and development modes.
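
The core-rebalancing step can be sketched as an even split of available cores across active jobs. Parameter names here are illustrative; the production tool also weights by processing profile:

```python
import os

def rebalance_workers(active_jobs, reserve_cores=1, max_total=None):
    """Divide available CPU cores evenly across active jobs, keeping
    `reserve_cores` free for the GUI and OS. Returns {job: worker_count}."""
    cores = max_total or max((os.cpu_count() or 1) - reserve_cores, 1)
    if not active_jobs:
        return {}
    base, extra = divmod(cores, len(active_jobs))
    # Every job gets at least one worker; leftover cores go to the first jobs.
    return {
        job: max(base + (1 if i < extra else 0), 1)
        for i, job in enumerate(active_jobs)
    }
```

Re-running this whenever a job starts or finishes is what lets a single machine keep all cores busy without starving any one conversion queue.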

Outcome: Runs daily in a production environment, processing batch conversions across shared network folders. Separate modules handle PDF-to-TIFF, PDF-to-JPEG, and JPEG-to-TIFF pipelines with job management and error recovery.

Python PyQt5 Multiprocessing Network I/O
View on GitHub →

Study Aggregator

Problem: Processing DICOM medical imaging studies from CDs, ZIP archives, and network drives is tedious and error-prone. Staff need organized patient reports quickly, and existing tools don't handle the variety of formats, encryption, and edge cases found in the real world.

Approach: Built a Windows desktop application that reads DICOM files from directories, encrypted/unencrypted ZIPs (nested up to 10 levels), and optical drives. Intelligently merges patient studies using ID, name, and DOB matching with conflict detection. Outputs formatted reports via clipboard or PDF. Integrates into the Windows right-click context menu for seamless workflow. Scans with a multithreaded pool sized at 4x CPU cores (capped at 32 threads) for fast processing.
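
A minimal sketch of the merge-and-conflict logic, assuming each study is a dict with `patient_id`, `name`, and `dob` fields. The real matcher is fuzzier than exact comparison, and the field names are illustrative:

```python
import os

# Thread-pool sizing used for scanning: 4x CPU cores, capped at 32.
MAX_SCAN_THREADS = min((os.cpu_count() or 1) * 4, 32)

def merge_studies(studies):
    """Group studies by patient ID; flag any ID that appears with
    mismatched demographics so an operator can resolve it manually.
    Returns (merged_by_id, conflicting_ids)."""
    merged, conflicts = {}, []
    for s in studies:
        key = s["patient_id"]
        if key not in merged:
            merged[key] = dict(s)
        elif (merged[key]["name"], merged[key]["dob"]) != (s["name"], s["dob"]):
            conflicts.append(key)   # same ID, different name or DOB
    return merged, conflicts
```

Surfacing conflicts instead of silently picking a winner matters in a medical context: mismatched demographics usually mean a data-entry error upstream, not a duplicate.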

Outcome: Distributed via installer with auto-update support. Used daily in a production environment to process incoming medical imaging studies. Handles edge cases like extensionless DICOM files, AES-encrypted archives, and mixed-format directories.

Python DICOM Multithreading Inno Setup
View on GitHub →

Sanitized summaries of proprietary work. No internal details — just the shape of the problem and what I did about it.

Document Processing Automation System

Context: High-volume litigation support environment processing thousands of documents daily across multiple formats, sources, and departments.

Problem: Document intake, classification, and conversion were heavily manual. Staff were hand-sorting files, running individual conversions, and manually tracking job status. Bottlenecks compounded during peak volume, and errors were common.

Approach: Designed and built an end-to-end automation pipeline covering OCR, format conversion, file classification, and job tracking. Integrated with existing network storage and production hardware. Built operator-facing tools for monitoring and intervention.

Outcome: Reduced manual processing time significantly. Eliminated entire categories of errors. System runs daily with minimal oversight.

Python OCR Automation Internal

Production Environment Stabilization

Context: Mixed production environment with networked printers, shared storage, Active Directory, and multiple interdependent internal tools.

Problem: Recurring failures across the production floor — print jobs stalling, network paths breaking, tools losing connection to shared resources. No single root cause; issues spanned DNS, firewall rules, NAS permissions, printer firmware, and application configuration. Previous attempts to fix individual symptoms hadn't addressed the systemic fragility.

Approach: Mapped the full environment: infrastructure, hardware, software dependencies, and user workflows. Identified failure patterns across layers. Rebuilt DNS/DHCP configuration, restructured NAS permissions and backup policies, updated firewall rules, performed full maintenance on production printers, and standardized application deployment.

Outcome: Environment stabilized. Recurring failures resolved. Established monitoring and documentation to prevent regression.

Infrastructure DNS/DHCP Hardware Internal