
Scaling ML Pipelines Means Reducing Hidden Manual Work

ML pipelines usually fail to scale because they depend on undocumented manual steps around data preparation, retraining, packaging, and release coordination.

  • MLOps
  • Airflow
  • MLflow
  • CI/CD

Problem

Pipeline discussions often focus on tools, but scaling problems usually come from hidden manual work. If model updates depend on someone remembering the right sequence of steps, locating the correct data snapshot, or manually coordinating a release, the pipeline is not actually scalable.

Where teams get stuck

  • data preparation logic lives in notebooks or ad hoc scripts
  • model artifacts are hard to compare or reproduce
  • release steps are only partially automated
  • incident response is slowed by missing lineage and poor observability

What improves scaling

The biggest gains usually come from explicit process boundaries:

  • track experiments and artifacts in a way other engineers can inspect
  • automate orchestration for recurring data and retraining tasks
  • package models through repeatable release steps
  • keep lineage and validation visible during deployment
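The first and last points, inspectable artifacts and visible lineage, are usually implemented with tools like MLflow or DVC, but the underlying idea is small enough to sketch with the standard library alone. Everything below is illustrative (the `RunManifest` name, the fields chosen, the file layout), not any specific tool's API:

```python
import hashlib
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path


def file_fingerprint(path: Path) -> str:
    """Content hash of a data snapshot, so runs can be compared and reproduced."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:16]


@dataclass
class RunManifest:
    """Minimal record that makes a training run inspectable by other engineers."""
    run_id: str
    data_hash: str      # which snapshot the model actually saw
    params: dict        # resolved hyperparameters, not defaults
    artifact_path: str  # where the packaged model lives
    metrics: dict       # validation results for this run

    def save(self, directory: Path) -> Path:
        out = directory / f"{self.run_id}.json"
        out.write_text(json.dumps(asdict(self), indent=2, sort_keys=True))
        return out


# Usage: record everything needed to reproduce or audit the run.
with tempfile.TemporaryDirectory() as tmp:
    tmpdir = Path(tmp)
    data = tmpdir / "train.csv"  # hypothetical data snapshot
    data.write_text("id,label\n1,0\n2,1\n")
    manifest = RunManifest(
        run_id="2024-05-01-baseline",
        data_hash=file_fingerprint(data),
        params={"lr": 0.01, "epochs": 5},
        artifact_path="models/baseline.pkl",
        metrics={"f1": 0.82},
    )
    saved = manifest.save(tmpdir)
    reloaded = json.loads(saved.read_text())
```

The point is not the format but the boundary: once a run writes a manifest like this, comparing two models or reproducing last month's release stops depending on anyone's memory.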

Tradeoffs

Standardization adds upfront cost. The payoff appears when update frequency increases, team size grows, or regulated environments demand traceability. At that point, reproducibility becomes a delivery feature rather than documentation overhead.

Production lesson

Scaling ML systems is less about adding new infrastructure and more about removing hidden operational dependencies. The team moves faster when the workflow is visible, inspectable, and repeatable.

Related projects

Case studies where these tradeoffs showed up in practice.

Project · Legal Tech · Public Sector AI

PGDF

OSIRIS Legal-Fiscal AI Workflows

Data Scientist · May 2023 - May 2024

AI delivery for PGDF legal-fiscal operations, spanning production APIs, supervised and semi-supervised models, active learning, and early LLM exploration for document-heavy institutional workflows.

Primary impact

Brought governed ML workflows and production APIs into legal-fiscal operations, while designing active-learning paths for longer-term model adaptation.

  • FastAPI
  • Active Learning
  • MLflow
  • DVC
  • LLM

Key outcomes

  • Production APIs connected model outputs to PGDF internal systems
  • Active-learning loop designed to reduce model drift over time
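The second outcome names an active-learning loop. The project specifics are not spelled out here, but the core selection step in such loops is typically uncertainty sampling, sketched below in plain Python; the function name, threshold, and document scores are illustrative, not taken from the project:

```python
def select_for_labeling(predictions, batch_size=2):
    """Uncertainty sampling: pick the unlabeled documents whose predicted
    positive-class probability is closest to 0.5, i.e. where the model is
    least sure. `predictions` maps document id -> probability.
    """
    by_uncertainty = sorted(
        predictions.items(),
        key=lambda item: abs(item[1] - 0.5),  # distance from maximum uncertainty
    )
    return [doc_id for doc_id, _ in by_uncertainty[:batch_size]]


# Usage: the most ambiguous documents go to human reviewers first,
# and their labels feed the next retraining cycle.
scores = {"doc_a": 0.97, "doc_b": 0.52, "doc_c": 0.08, "doc_d": 0.44}
queue = select_for_labeling(scores, batch_size=2)
# → ["doc_b", "doc_d"]
```

Routing reviewer effort toward the model's most uncertain cases is what lets a loop like this counter drift: the training set keeps growing exactly where the model is weakest.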

Next step

Want the delivery context behind this line of thinking?

The project pages show where these technical decisions had to work inside real institutions, teams, and operational constraints.