Komatsu, one of the world's largest heavy equipment manufacturers, is projecting savings of over 22,000 hours per year after deploying Retool to automate their operations workflows.

That is roughly ten full-time employees' worth of manual work, freed up in a single initiative.

Case studies like this are useful. They prove that this scale of result is real, that manufacturing companies are achieving it, and that the tools to get there exist and are accessible. They are not just for enterprise companies with unlimited budgets and dedicated engineering teams.

But case studies are also edited. They are published by the vendor. They lead with the headline number and end before the hard part. What they rarely cover is what the build actually required: the decisions made before the first line of automation was written, the failure modes that had to be designed around, and the operational conditions that made the result possible.

This post covers both: what Komatsu did and why it worked, and what a continuous automation engineer would tell you before you try to replicate it.

What Komatsu Actually Built

Komatsu's operations teams were managing customer service and equipment data across multiple disconnected systems. Frontline staff handling customer inquiries and equipment issues had no unified view. To answer a customer question, an operator might need to context-switch between two or three platforms to piece together the complete picture: customer account, machine history, service records, active cases.

That context-switching is not just slow. It is error-prone. Every platform transition is an opportunity to miss something, misread something, or work from slightly stale data.

Komatsu used Retool to build workflow automation apps that consolidate customer and machine data into a single interface. Frontline staff now open one screen and see everything they need. The platform gives them a complete customer and machine view without the multi-system lookup.

The projected result: 22,000+ hours saved per year.

What made it work was not the technology. The technology (Retool) is straightforward to deploy. What made it work was the specificity of the problem: a defined user (frontline ops staff), a clear pain point (context-switching between platforms), and a discrete data source to start with (customer and machine records). The scope was tight before the build began.

What the Case Study Doesn't Cover

This is where a published case study ends and where an experienced automation engineer picks up. The following are the five things Komatsu almost certainly had to navigate, and that you will too.

1. ERP API Throttling Is Real

Connecting live ERP data to a Retool dashboard sounds like a single integration step. In practice, most ERPs limit how frequently their API can be called. Some throttle at a few hundred requests per minute. Others have daily caps. If your dashboard is refreshing frequently across multiple panels and multiple concurrent users, you can hit those limits quickly.
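One way to stay under a per-minute cap is to space requests out on the client side. This is a minimal sketch, not any specific ERP's SDK; the 100-requests-per-minute figure is an assumption, so check your vendor's documented limits (and whether they apply per user, per app, or per tenant).

```python
import time

class ThrottledClient:
    """Spaces out ERP API calls to stay under a per-minute request cap."""

    def __init__(self, fetch, max_per_minute=100):
        # `fetch` is whatever callable performs one API request;
        # `max_per_minute` is an assumed limit -- use your vendor's real one.
        self.fetch = fetch
        self.min_interval = 60.0 / max_per_minute
        self._last_call = 0.0

    def get(self, *args, **kwargs):
        # Sleep just long enough that calls never arrive faster than the cap.
        wait = self.min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)
        self._last_call = time.monotonic()
        return self.fetch(*args, **kwargs)
```

Client-side spacing helps a single process, but it does not protect you from many concurrent dashboard users all polling at once, which is why the caching layer below matters more.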

The solution is a caching layer between your ERP and the dashboard. Rather than querying the ERP directly on every page load, you cache the data in a fast-access store such as Postgres or Redis and refresh it on a defined schedule (a scheduled job or Cloud Function works fine for the refresh). The dashboard reads from the cache. The ERP gets called on a controlled cadence.

This is not a complex architecture, but it needs to be planned before building, not retrofitted after you hit your first API limit.
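The caching pattern can be sketched in a few lines. This version keeps the cache in memory for clarity; in production you would back it with Postgres or Redis, and the 300-second refresh interval is an assumption to tune against your API limits.

```python
import time

class CachedPanel:
    """Serves dashboard reads from a cache, refreshing from the ERP on a schedule."""

    def __init__(self, fetch_from_erp, refresh_seconds=300):
        # `fetch_from_erp` is a placeholder for your real ERP query.
        self.fetch = fetch_from_erp
        self.refresh_seconds = refresh_seconds
        self._data = None
        self._fetched_at = 0.0

    def read(self):
        # Dashboard reads hit the cache; the ERP is only called when stale.
        stale = time.monotonic() - self._fetched_at > self.refresh_seconds
        if self._data is None or stale:
            self._data = self.fetch()
            self._fetched_at = time.monotonic()
        return self._data
```

The key property: however many users open the dashboard, the ERP sees at most one call per panel per refresh interval.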

2. Scope Your Credentials Precisely

A Retool dashboard that reads live production data needs API credentials configured with read-only access, scoped to exactly the data the dashboard uses. This sounds obvious. It is frequently skipped in the initial build because it slows things down.

The risk: a credential with write access, or with access to more data objects than the dashboard needs, creates exposure. One misconfigured query or workflow step can inadvertently trigger a write operation in the ERP. This has happened. It is recoverable, but it is disruptive and erodes trust in the tool immediately.

Take the 30 minutes to configure the credentials correctly from the start. Scope them to the minimum required access. Document what they are and where they are used.
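One cheap enforcement step is a startup guard that fails fast if the configured credential can do more than read. The scope names below are hypothetical; map them to whatever your ERP's permission model actually calls its read-only grants.

```python
# Hypothetical read-only scopes for the three data objects the dashboard uses.
READONLY_SCOPES = {"customers:read", "machines:read", "service_records:read"}

def check_credential_scopes(granted_scopes):
    """Raise at startup if the dashboard credential exceeds its documented scope."""
    extra = set(granted_scopes) - READONLY_SCOPES
    if extra:
        raise PermissionError(f"credential has unexpected scopes: {sorted(extra)}")
    return True
```

A check like this doubles as documentation: the allowlist is a written record of exactly what the dashboard is supposed to touch.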

3. The 22,000 Hours Is Projected, Not Measured

Projected savings are based on an assumption about adoption. If frontline staff use the new dashboard instead of their previous process, the projected hours materialize. If they revert to their old habits, they don't.

Adoption is the variable that case studies almost never publish because it is harder to quantify and less impressive as a headline. But it is the variable that determines whether the result is real.

Before the build, designate one internal owner responsible for rollout and adoption. Define what "using the dashboard" looks like in practice. Plan a 30-day period where you actively monitor whether the team is using the new interface or defaulting to their previous system. Build feedback into the process so the dashboard can be adjusted based on real usage.
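Monitoring that 30-day period only works if "using the dashboard" is defined numerically. A minimal sketch, assuming you can export a per-user log of days the dashboard was opened (the 20-day bar is an assumption; set your own):

```python
def adoption_rate(usage_log, team, min_days=20):
    """Share of the team that opened the dashboard on at least `min_days` days.

    `usage_log` maps each user to the set of days they used the dashboard.
    Both the data source and the 20-day threshold are assumptions -- define
    "using the dashboard" for your own rollout before you measure it.
    """
    if not team:
        return 0.0
    active = [u for u in team if len(usage_log.get(u, set())) >= min_days]
    return len(active) / len(team)
```

Whatever metric you choose, pick it before launch so the 30-day number is a measurement, not a rationalization.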

A dashboard that 80% of the team uses daily delivers a different result than one that was demoed successfully and then quietly ignored.

4. Start With One Data Source

The scope of Komatsu's initial build was focused: customer and machine records. Not every data source in the organization. Not a full operations platform. One problem, one data source, one user type.

The temptation when scoping a similar project is to connect everything: ERP, WMS, Shopify, spreadsheets, carrier APIs. That scope produces a project that takes months to deliver, involves too many stakeholders, and often never reaches production because something always needs to be resolved before the next integration can start.

Connect one data source first. Build the panel. Get it working, tested, and trusted by real users. Then expand. The subsequent integrations will be faster because the infrastructure (authentication, caching layer, refresh logic) is already in place.

Most failed ops dashboard projects come from teams that tried to build the complete vision in week one.

5. Define Your Refresh Strategy Before You Build

Real-time data, near-real-time data, and cached data are three different architectures with different tradeoffs. The right choice depends on your ERP, your data volumes, how frequently the underlying data changes, and what your vendor contract allows.

For most manufacturing operations dashboards, a 5 to 15 minute refresh cycle on most panels (with exception flags being more frequent or webhook-driven) is sufficient. The team does not need inventory levels updated every 10 seconds. They need inventory levels that are accurate to within the last 15 minutes and flagged when something crosses a threshold.
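That tiered approach can be written down as a simple policy table before any panel exists. The panel names and intervals below are illustrative, not Komatsu's actual configuration:

```python
REFRESH_POLICY = {
    # panel name -> polling interval in seconds (names are illustrative)
    "inventory_levels": 15 * 60,  # accurate to the last 15 minutes is enough
    "open_cases": 5 * 60,
    "exception_flags": 0,         # 0 = webhook-driven, never polled
}

def needs_refresh(panel, seconds_since_update):
    """True when a polled panel's cached data is older than its policy allows."""
    interval = REFRESH_POLICY[panel]
    if interval == 0:
        return False  # pushed via webhook, polling never triggers
    return seconds_since_update > interval
```

Writing the policy first forces the refresh conversation to happen once, upfront, instead of panel by panel during the build.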

Deciding this upfront affects how you build the caching layer, how you configure the ERP API credentials, and how you structure the data pipeline. Retrofitting a real-time polling architecture onto a system built for hourly refreshes is expensive. Making this decision before the first panel is built is not.

Why This Matters for Mid-Market Manufacturers

Komatsu is a global company with significant resources. The scale of their implementation reflects that. But the underlying problem they solved, and the tooling they used to solve it, is directly accessible to mid-market manufacturers with a fraction of that infrastructure.

Retool does not require a large engineering team to build on. n8n does not require an enterprise IT department to maintain. The barriers to building an ops dashboard with real-time data are lower now than they have been at any point in the last 20 years.

The conditions that made Komatsu's result possible (a specific problem, a defined user, a single starting data source, and careful scoping) are conditions any operations leader can create regardless of company size.

The 22,000 hours came from focus, not scale.

Before You Build Anything

Pick one operational question your team answers manually every day. Not five questions. One.

Define the data source that would answer it if the data were accessible in real time. Identify the user who would benefit most from seeing that answer in a single interface rather than across multiple systems.

That is the scope of your first build.

Book a free call to scope your build before you start