
Manufacturing Case Study 3

one hour to finalise and distribute. The plan was in the plants' hands by noon on the first Monday of the month — two and a half days earlier than the prior process allowed. The OTIF improvement materialised faster than the team had expected. The root cause analysis model, applied retrospectively to the prior six months of late deliveries, showed that 41 percent of OTIF failures had a single identifiable upstream cause: raw material availability had constrained a reactor line that the plan had not flagged as constrained because the inventory data had been three days old when the plan was built. With the planning data refreshing every four hours, those constraints were visible before commitments were made, not after shipments were missed.

The Challenge

The VP of Manufacturing had been staring at the same number for three quarters: a 9.2 percent end-of-line rejection rate. The industry benchmark for their product category was 3 percent or below. Every quality review meeting produced the same discussion — possible causes identified, corrective actions assigned, and then four to six weeks later the rejection rate would briefly improve before reverting. Nobody had found the root cause because nobody had the data infrastructure to look for it systematically.

Quality lab results lived in the LIMS (Laboratory Information Management System). Process parameters — mixing temperatures, blend times, ingredient addition sequences, line speeds, filler pressures — lived in the Rockwell FactoryTalk production historian. Production run records lived in the ERP. These three systems had never been connected. Every post-rejection investigation required a quality engineer to manually cross-reference the LIMS rejection record, then call the production supervisor to reconstruct the process parameters from shift logs and verbal memory, then try to identify what had been different about that batch.

The blind spot was structural and expensive. At a 9.2% rejection rate on 180,000 daily units, the manufacturer was reworking or writing off an average of 16,560 units every day. Rework consumed production line time, ingredient cost, labour, and packaging material. Waste consumed the same, without producing a recoverable unit.

The quality team was talented and motivated. They were investigating in the dark — looking at rejection events individually when the pattern was only visible in aggregate, across the full dataset of process parameters and quality outcomes.
Where quality intelligence was structurally absent:

• Three disconnected data systems — LIMS quality results, Rockwell FactoryTalk process historian, SAP production run records — with no integration and no mechanism to join a quality outcome to the specific process conditions that produced it
• Post-rejection root cause investigations conducted manually — quality engineers cross-referencing paper logs and supervisor recollections, taking 2 to 5 days per investigation with no guarantee of reaching a definitive conclusion
• No process parameter monitoring against quality-correlated specification windows — the process ran within broad engineering tolerances, without any visibility into the tighter bands within those tolerances that the data ultimately showed were correlated with quality outcomes
• Quality performance reported at the monthly review level — by the time a pattern was identified in the monthly data, 30 days of production had passed at the same rejection rate
• No cross-facility quality benchmarking — the three facilities ran the same product lines with different rejection rates and nobody knew why, because the quality and process data had never been compared across sites
• $2.8M in annual rework and waste costs with no systematic mechanism for identifying and eliminating their root causes

The DataVines Solution

The first question we asked was whether the rejection data could be joined to the process data at the batch level. The answer was technically yes — but it had never been done, because the batch identifiers used in the LIMS were different from the batch codes used in FactoryTalk and different again from the production order numbers in SAP. Building the bridge between those three identifier systems was the prerequisite for everything else.

Building the quality-process data foundation

• Built API integrations and file-based ingestion pipelines pulling quality inspection results (pH, viscosity, colour L*a*b* values, fill weight, microbiological counts, and pass/fail status) from the LIMS (LabWare); process parameter time-series data (mixing temperature, blend duration, ingredient addition timing, line speed, filler pressure, CIP cycle completion status) from Rockwell FactoryTalk; and production run records (product code, batch size, line assignment, ingredient lot numbers, operator ID, shift) from SAP — all loaded into a centralised Google BigQuery warehouse
• Built a batch identifier reconciliation model linking LIMS batch codes to FactoryTalk production run IDs and SAP production orders — creating a unified batch master that connected every quality outcome to the specific process conditions and ingredient lots that produced it, for 18 months of historical production
• Built dbt transformation models producing a clean, batch-level analytical dataset with standardised quality metrics and process parameter summaries per batch — one agreed methodology for process deviation calculation, one approach to detecting CIP cycle anomalies, documented logic for handling split batches and rework re-entry

Building the quality analytics layer

• Built a multivariate process capability analysis model identifying which process variables — and which combinations of variables — were most strongly correlated with quality outcomes across each product line, using gradient boosting feature importance analysis on 18 months of batch-level data
• The analysis identified three primary drivers responsible for 71% of all rejections: mixing temperature exceedance above 74°C during the emulsification phase (strongly correlated with viscosity failures), ingredient addition sequence deviations (specifically out-of-sequence acid addition, correlated with pH failures), and CIP cycle completion anomalies preceding a production run (correlated with microbiological count failures)
• Built real-time process monitoring alerts triggered at the identified critical control point windows — when mixing temperature approached 72°C during emulsification, a yellow alert reached the line operator and quality supervisor; above 73°C, a red alert triggered with an automated recommendation to pause blending and check the temperature control system before the batch reached the final ingredient addition stage
• Built a cross-facility quality benchmarking model comparing rejection rates, process parameter distributions, and quality outcome patterns across all three facilities for each shared product line — identifying that the Ohio facility's rejection rate on the ranch dressing line was 4.1% versus Indiana's 11.3% for the same product, and that the Ohio facility's mixing temperature profile during emulsification was consistently 1.8°C cooler on average, operating within the optimal window the model had identified

Building the quality intelligence dashboard

• Built a Tableau dashboard suite with four views: a VP of Manufacturing quality performance summary across all three facilities, a batch-level quality analytics view showing the process parameter profile for any batch alongside its quality outcome, a real-time process monitoring view flagging active deviations across all running production lines, and a cross-facility benchmarking comparison
• Built an automated daily quality briefing covering the prior day's rejection rate by facility and product line, the top three process deviation types flagged, and any batch currently in hold status — generated automatically and landing in quality, operations, and plant management inboxes before the 7 AM production meeting
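The identifier reconciliation step can be sketched as a crosswalk join: one row per physical batch mapping the three identifier systems, with each source table merged through it. Every table and column name below is an illustrative placeholder, not the client's actual schema; in practice the crosswalk rows would be derived from overlapping timestamps, product codes, and line assignments.

```python
import pandas as pd

# Hypothetical extracts from the three systems (illustrative schemas only).
lims = pd.DataFrame({
    "lims_batch_code": ["QB-1001", "QB-1002", "QB-1003"],
    "ph": [4.1, 3.6, 4.0],
    "qc_status": ["PASS", "FAIL", "PASS"],
})
historian = pd.DataFrame({
    "ft_run_id": ["FT-77-A", "FT-77-B", "FT-78-A"],
    "max_mix_temp_c": [71.4, 74.8, 70.9],
})
erp = pd.DataFrame({
    "sap_order": ["SAP-5001", "SAP-5002", "SAP-5003"],
    "ingredient_lot": ["LOT-9", "LOT-9", "LOT-12"],
})

# The crosswalk is the hard-won artefact: one row per physical batch,
# mapping the three incompatible identifier systems to each other.
crosswalk = pd.DataFrame({
    "lims_batch_code": ["QB-1001", "QB-1002", "QB-1003"],
    "ft_run_id": ["FT-77-A", "FT-77-B", "FT-78-A"],
    "sap_order": ["SAP-5001", "SAP-5002", "SAP-5003"],
})

# Unified batch master: each quality outcome joined to the process
# conditions and production order that produced it.
batch_master = (
    crosswalk
    .merge(lims, on="lims_batch_code", how="left", validate="1:1")
    .merge(historian, on="ft_run_id", how="left", validate="1:1")
    .merge(erp, on="sap_order", how="left", validate="1:1")
)
print(batch_master[["lims_batch_code", "qc_status", "max_mix_temp_c"]])
```

The `validate="1:1"` checks guard against the split-batch and rework re-entry cases mentioned above, failing loudly if an identifier unexpectedly maps to more than one row.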
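The feature-importance approach can be illustrated with a small synthetic example using scikit-learn's GradientBoostingClassifier. The feature names, effect sizes, and rejection mechanism below are invented for illustration; only the technique, gradient boosting feature importance on batch-level features, mirrors what the case study describes.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 600  # synthetic batches

# Illustrative batch-level features (not the client's schema).
mix_temp = rng.normal(71, 2.5, n)       # deg C during emulsification
blend_min = rng.normal(18, 3, n)        # blend duration, minutes
seq_dev = rng.binomial(1, 0.15, n)      # out-of-sequence acid addition flag
cip_anomaly = rng.binomial(1, 0.10, n)  # CIP final-rinse out of range flag
line_speed = rng.normal(120, 10, n)     # units/min (pure noise here)

# Synthetic rejection outcome driven mainly by temperature exceedance
# above 74 deg C, sequence deviations, and CIP anomalies.
logit = 0.9 * (mix_temp - 74).clip(0) + 1.5 * seq_dev + 1.8 * cip_anomaly - 2.5
reject = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([mix_temp, blend_min, seq_dev, cip_anomaly, line_speed])
names = ["mix_temp", "blend_min", "seq_dev", "cip_anomaly", "line_speed"]

model = GradientBoostingClassifier(random_state=0).fit(X, reject)
for name, imp in sorted(zip(names, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```

On the real 18-month dataset the same ranking exercise, run per product line, is what surfaced the three primary drivers behind 71% of rejections.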
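The tiered alerting described above reduces to a small decision function. The 72°C and 73°C thresholds come from the case study; the function name, `Alert` type, and routing details are assumptions, and actual dispatch to operators and supervisors would sit downstream of this check.

```python
from dataclasses import dataclass
from typing import Optional

YELLOW_C = 72.0  # approaching the critical control window
RED_C = 73.0     # pause blending and check temperature control

@dataclass
class Alert:
    level: str
    message: str

def emulsification_temp_alert(temp_c: float) -> Optional[Alert]:
    """Map a live emulsification-phase temperature reading to an alert tier."""
    if temp_c > RED_C:
        return Alert("red", f"{temp_c:.1f} degC: pause blending, check temperature control")
    if temp_c >= YELLOW_C:
        return Alert("yellow", f"{temp_c:.1f} degC: approaching limit, notify quality supervisor")
    return None  # within the safe band, no alert

print(emulsification_temp_alert(71.0))        # None
print(emulsification_temp_alert(72.4).level)  # yellow
print(emulsification_temp_alert(74.1).level)  # red
```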

Operational Impact

• Monthly plan generation time reduced from 3 days to 4 hours — 78% reduction in planning cycle time, with the plan distributed to plants two and a half days earlier each month
• 31% improvement in on-time-in-full delivery rate within 60 days of go-live — from 71% to 93% OTIF across all three plants
• Planning data synchronised across all three plants on a 4-hour refresh cycle — replacing three time-lagged manual exports with a single continuously updated planning foundation
• Scenario modelling capability enabling real-time supply disruption impact assessment — replanning for a reactor outage moved from a 2-day manual exercise to a 20-minute simulation
• OTIF root cause analysis automated — late delivery investigation time reduced from 2 days per incident to a same-day dashboard query
• Customer priority allocation framework embedded in the planning model — constrained supply decisions made consistently against agreed criteria, not individual planner judgment

WHAT COMES NEXT

DataVines is now building a demand sensing model for the manufacturer — using customer order history, weather patterns, agricultural season indices, and commodity price trends to generate a 12-week rolling demand forecast at the customer and product grade level. The goal is to shift production planning from a monthly order-book-driven exercise to a forward-looking process that positions reactor capacity against probable demand 8 to 12 weeks before orders are placed — reducing the proportion of the plan consumed by reactive short-cycle order management.

CASE STUDY 03 · MANUFACTURING · FOOD & BEVERAGE · QUALITY CONTROL & PRODUCTION YIELD ANALYTICS

9% of Their Output Was Being Rejected at Final QC. Nobody Could Explain Why.

A large-scale food and beverage manufacturer was experiencing a 9% end-of-line rejection rate across its three processing facilities — above the industry benchmark by a factor of three. DataVines connected process sensor data, quality lab results, and production run parameters for the first time, and built the analytics layer that identified the specific upstream process variables driving downstream quality failure.

The VP of Manufacturing described the cross-facility benchmarking analysis as the first time in his tenure that the quality team had a quantitative explanation — not a hypothesis — for why one facility was consistently outperforming another on the same product line. It was not equipment, not operator skill, not ingredient sourcing. It was 1.8 degrees Celsius during a four-minute window in the emulsification phase.

The corrective action that followed was operationally straightforward once the root cause was identified: the emulsification temperature specification for the ranch dressing line was revised from a 10°C tolerance band (65–75°C) to a 5°C band (65–70°C), the real-time monitoring alert threshold was tightened accordingly, and operators at all three facilities were retrained on the revised control point. The Indiana facility's rejection rate on that product line dropped from 11.3% to 3.4% within six weeks.

The CIP cycle anomaly finding had a more significant operational impact. The model had identified that 34% of microbiological rejections occurred in the batch immediately following a CIP cycle where the final rinse conductivity reading had been outside the validated range — a parameter that was being logged in FactoryTalk but had never been connected to the quality outcomes that it predicted. A real-time alert was added for CIP final rinse anomalies, triggering a mandatory quality hold on the subsequent batch pending lab results. Microbiological rejection rate dropped 82% in the following 60 days.
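The CIP hold rule described above amounts to a single range check on the final rinse conductivity. A minimal sketch follows; the validated range used here is an assumed placeholder, since the source does not state the actual limits, and the function name is illustrative.

```python
# Assumed placeholder range -- real limits come from the site's CIP
# validation documentation, not from this case study.
VALIDATED_RINSE_CONDUCTIVITY_US = (0.0, 30.0)  # microsiemens/cm

def requires_quality_hold(final_rinse_conductivity_us: float) -> bool:
    """Return True when the batch following a CIP cycle must go on
    mandatory quality hold: the final rinse conductivity reading fell
    outside the validated range."""
    lo, hi = VALIDATED_RINSE_CONDUCTIVITY_US
    return not (lo <= final_rinse_conductivity_us <= hi)

print(requires_quality_hold(12.5))  # False: rinse in range, batch proceeds
print(requires_quality_hold(48.0))  # True: out of range, hold pending lab results
```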

• End-of-line rejection rate reduced from 9.2% to 3.1% — a 66% improvement, saving $2.8M annually in rework and waste costs
• Root cause identification time for quality deviations reduced from 2–5 days of manual investigation to a same-day dashboard query — 6x faster
• 43% reduction in quality-related production line stoppages — real-time process monitoring alerts catching deviations before batches reach final QC
• Three previously disconnected data systems unified into a single batch-level quality-process analytical dataset — 18 months of historical data connected for the first time
• Cross-facility benchmarking enabled the Ohio facility's best-practice process parameters to be adopted at all three sites — turning local operational knowledge into a documented, organisation-wide standard
• Microbiological rejection rate reduced 82% after CIP cycle anomaly alert implementation — a process control improvement that had been hiding in the data for years, invisible without the analytical connection

WHAT COMES NEXT

DataVines is now building a predictive yield optimisation model for the manufacturer — using the process parameter combinations identified as most strongly correlated with quality outcomes to generate real-time process adjustment recommendations during production runs. The goal is to move from alert-based process monitoring (flagging when a parameter is outside its optimal window) to prescriptive process guidance (recommending the specific adjustments that will maintain the parameter profile within the optimal window throughout the entire run).

CASE STUDY 04 · MANUFACTURING · STEEL & METALS FABRICATION · EXECUTIVE OPERATIONS INTELLIGENCE & DATA PIPELINE AUTOMATION

The COO Got Her Operations Report on Friday. It Reflected Tuesday's Reality.

A mid-sized steel fabrication and metals processing company running six fabrication lines across two facilities had an operations reporting process that took four days to produce and delivered data that was already half a week old. DataVines automated the entire operations data infrastructure — and gave the COO a live view of every line, every shift, every metric, by 6 AM every morning.