Two things kill first automation projects. The first is overscoping: the team tries to connect five systems, involve eight stakeholders, and ship a complete operations platform in month one. The project drags into month four and gets deprioritized before it reaches production.
The second is underscoping: one small workflow gets automated, it works fine, and then nothing follows. No second build. No expansion. The automation sits as a one-off proof of concept while the team goes back to everything else they were doing.
The difference between a first build that compounds into a broader automation practice and one that stays a single proof of concept is almost always what happens in the first 90 days. Not the tool. Not the budget. The pattern.
This post walks through what the right 90-day pattern looks like, what goes wrong at each stage, and how to set it up before the build begins.
Ninety days is long enough to build, test, and measure one workflow in production. It is short enough to stay focused and maintain internal momentum.
Projects that don't show measurable value in the first 90 days are significantly more likely to be deprioritized before a second build begins. The first automation is not just a workflow improvement. It is the proof of concept for the entire practice. If it doesn't produce a visible result in 90 days, the internal case for the next build weakens.
Keeping the first build to one workflow, one data source, and one owner is not a limitation. It is the architecture of a successful 90-day outcome.
The goal of the first 30 days is a working automation in production. Not a pilot. Not a staging environment test. A live workflow processing real data.
What this requires:
- A completed readiness audit on the target workflow (covered in yesterday's post).
- API credentials confirmed and tested against the production ERP, not just the sandbox.
- A written scope document that defines the trigger, inputs, outputs, exception paths, and the definition of success.
- One named internal owner who has agreed to monitor the build for the first 30 days after go-live.
What gets built:
- The trigger: the discrete event that starts the workflow (a form submitted, a CRM opportunity created, a threshold crossed).
- The core automation path: data pulled, processed, and acted on.
- The exception path: what happens when the expected input is missing, wrong, or flags for human review.
- The notification layer: the owner gets an alert when the automation runs, and a different alert when it fails.
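Those four pieces can be sketched in a few dozen lines. Everything below is hypothetical — the payload shape, the `notify` channel, and the field names are stand-ins for whatever your CRM and ERP actually emit — but it shows how thin the skeleton of a first build can be:

```python
from dataclasses import dataclass

@dataclass
class Result:
    status: str   # "processed", "needs_review", or "failed"
    detail: str

def notify(owner: str, message: str) -> None:
    # Notification layer placeholder: in production this would post to
    # Slack or email rather than print.
    print(f"[notify {owner}] {message}")

def handle_opportunity(payload: dict, owner: str = "ops-owner") -> Result:
    """Trigger: a CRM opportunity-created event arrives as `payload`."""
    # Exception path: expected input missing or wrong -> route to a human.
    if "account_id" not in payload or "line_items" not in payload:
        notify(owner, "review needed: incomplete payload")
        return Result("needs_review", "missing account_id or line_items")

    try:
        # Core automation path: pull, process, act.
        total = sum(item["qty"] * item["unit_price"]
                    for item in payload["line_items"])
        # ... push the generated quote to the ERP here ...
        notify(owner, f"quote generated for {payload['account_id']}: {total:.2f}")
        return Result("processed", f"total={total:.2f}")
    except (KeyError, TypeError) as exc:
        # The failure alert is deliberately distinct from the success alert.
        notify(owner, f"FAILED for {payload.get('account_id')}: {exc}")
        return Result("failed", str(exc))
```

The point of the sketch is the shape, not the logic: every run ends in exactly one of three states, and the owner hears about all of them.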
What can go wrong:
Scope creep. The most common failure mode in the first 30 days is adding a second workflow before the first one is live. Resist it. Every addition to scope in the first 30 days pushes the go-live date and reduces the likelihood of hitting the 90-day measurement milestone.
ERP API access issues. Most ERPs have APIs. Not all of them are documented clearly, and some have rate limits or permission structures that are not obvious until you try to connect. Build in a week of buffer for API troubleshooting.
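Rate limits in particular are worth handling in code rather than in a buffer week. A minimal sketch of exponential backoff — `RateLimitError` and `request_fn` are hypothetical stand-ins for whatever your ERP client raises and calls:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the error your ERP client raises on HTTP 429."""

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a rate-limited API call, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the exception path
            # Wait 1s, 2s, 4s, 8s ... before the next attempt.
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injected so the behavior can be tested without actually waiting; in production the default `time.sleep` applies.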
The automation is live. Real data is flowing through it. And the edge cases that weren't visible in the documented workflow are surfacing.
This is not a sign the build failed. It is a normal and expected part of deploying a production automation. Every workflow has undocumented exceptions that only appear when real users with real data interact with the system.
What this phase looks like:
- The internal owner monitors the automation daily, not just when something breaks.
- Every exception that surfaces gets documented and either added to the automation's logic or flagged as an intentional manual step.
- The team using the new process is observed, not just surveyed. If 30% of the team is still using the old process, that is a signal, and the source of that friction needs to be identified.
What can go wrong:
Passive monitoring. The owner checks the error log when something is reported, but doesn't proactively review the automation's performance. The result is a slow accumulation of undocumented edge cases that degrades the automation's reliability over time.
No feedback loop. The team using the automation notices things that don't work well but has no channel to report them. Build a simple weekly feedback mechanism: a Slack message, a short form, or a standing 15-minute check-in. The automation improves faster when the people using it have a voice in how it runs.
By day 61, the automation has been running in production for at least 30 days. Real usage data exists. Actual time saved can be calculated, not estimated.
What to measure:
- Time saved per instance: how long did the manual process take, and how long does the automated version take?
- Volume: how many times has the automation run in 30 days?
- Error rate: how often did it produce an exception that required manual intervention?
This is where projected savings become measured savings. The number may be lower than the projection (adoption was incomplete) or higher (the edge cases that required manual intervention were fewer than expected). Either way, it is the real number.
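The arithmetic behind "measured savings" is simple enough to pin down in code. A sketch under assumed inputs — `runs` is a list of run records with an `exception` flag, a field name chosen here for illustration, not taken from any particular logging format:

```python
def day_90_numbers(runs, manual_minutes, automated_minutes):
    """Turn 30 days of run logs into the three day-90 metrics."""
    volume = len(runs)
    exceptions = sum(1 for r in runs if r["exception"])
    return {
        "volume": volume,
        "error_rate": exceptions / volume if volume else 0.0,
        # Count savings only for runs that didn't need manual rescue.
        "minutes_saved": (volume - exceptions)
                         * (manual_minutes - automated_minutes),
    }
```

For example, 140 runs with a 10% exception rate, a 60-minute manual process, and a 5-minute automated one yields 126 clean runs and 6,930 minutes saved. Whatever the number is, it is the one to compare against the target agreed on at day one.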
What comes next:
If the automation is performing, scope the second workflow. The infrastructure is already in place: API credentials are confirmed, the ERP connection is tested, the monitoring setup exists. The second build is faster than the first.
If adoption is lower than expected, identify the friction point before scoping the second workflow. A second build on a team that hasn't fully adopted the first one produces two partial-adoption automations instead of one successful one.
Here is what this looks like for a mid-market manufacturer running a quote-to-order automation:
Days 1-30: API access to ERP confirmed in week one (took longer than expected, rate limiting required a caching layer). Scope finalized: trigger is CRM opportunity created, standard quotes auto-generated, non-standard quotes routed to approval. Automation live in production by day 28.
Days 31-60: Three edge cases surface: quotes with international ship-to addresses fail because the ERP API returns a different address format than expected. Multi-line quotes with more than 50 SKUs hit the API rate limit. One product category requires a pricing approval from a person who is not in the CRM system. All three get resolved by day 55. Adoption is at 70% of the sales team by end of month two.
Days 61-90: Automation has run 140 times. Average time from CRM opportunity creation to quote sent: 22 minutes for standard quotes (previously 3-5 business days). Adoption at 85% and rising. Identified second workflow to scope: automated follow-up sequencing for quotes not opened within 72 hours.
Answer three questions before the build begins:
Which one workflow? Not the most complex one. The one that scores highest on the readiness audit. The one where the data is accessible, the process is documented, the owner is named, and the team is willing.
Who is the owner? Not a committee. One person with the access and accountability to monitor the automation daily for 90 days and escalate when something breaks.
What does success look like at day 90? Define the number before you start. Time saved per instance, volume processed, or error rate reduced. The measurement at day 90 is only meaningful if you agreed on the target at day one.
Check if your first workflow qualifies for our 3x ROI guarantee before you build