mot-r blog | avoiding stupidity at work

Riding the J-Curve of Productivity in Legal Ops

Written by Mike Tobias | Apr 24, 2025 1:07:27 PM

Last week, I commented on a LinkedIn post that discussed CLM failures, why they occur at such high rates, and the human toll these failures exact.

The author proposed some general reasons, which I agreed with, but I added a reply to introduce what I believe is one of the most overlooked reasons that CLM (and other Legal Tech) implementations fail: neglecting the J-Curve of Productivity. So I thought I would elaborate on that topic this week.

 

What is the J-Curve of Productivity?

A J‑curve is simply a time‑path that dips below the starting line before turning sharply upward, tracing a “J” shape. Applied to productivity, it describes situations where measured output per worker (or per dollar of capital) first falls after a big change—then rebounds and ends higher than before as people climb the learning curve and complementary assets are in place.

Economists Erik Brynjolfsson, Daniel Rock, and Chad Syverson formalized the idea in their 2021 paper “The Productivity J‑Curve: How Intangibles Complement General Purpose Technologies.” They show that when firms adopt a technology, the intangible investments needed to make it useful are systematically under‑measured. That mis‑measurement masks true productivity for several years, producing the down‑then‑up pattern in national statistics. This work has interesting implications for both large‑scale technology introductions in society (like AI) and small‑scale introductions in your teams.
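To make the mechanics concrete, here is a toy model (my own illustration, not from the paper) in which measured productivity equals true output minus expensed intangible investment. All the numbers are invented assumptions: front‑loaded spending on data work and training drags the metric below baseline before learning‑curve gains take over, tracing the "J".

```python
# Toy J-curve: measured productivity dips below baseline after adoption,
# then rebounds above it. Every constant here is an illustrative assumption.

def measured_productivity(years=8, baseline=100.0):
    path = []
    for t in range(years):
        learning_gain = 30 * (1 - 0.5 ** t)   # true gains ramp up as skills diffuse
        intangible_spend = 25 * 0.6 ** t      # front-loaded training and data work
        path.append(baseline + learning_gain - intangible_spend)
    return path

path = measured_productivity()
print(path)  # starts below 100, climbs past it within a couple of years
```

The shape, not the specific values, is the point: the early readings understate the project because the intangible build-out is expensed while its payoff is still accruing.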

 

Context | What the J Looked Like

Electrification of U.S. factories (1890‑1920): Output per worker actually fell for a decade as factories rearranged shop floors around electric motors; the payoff came only after workflow redesign.

PCs & the internet (1980s‑1990s): Solow’s “computer age everywhere but in the productivity statistics” paradox gave way to a late‑1990s surge once complementary software and skills diffused.

Today’s AI & generative models (2020s): Firms see early efficiency hits as they clean data and retrain staff; researchers expect a visible productivity boost later this decade.



Why does it matter?

Regardless of what a sales rep is paid to tell you, there is no all-singing, all-dancing, magical Legal technology solution that will deliver meaningful results without the previously mentioned “intangible investments.”

 

Intangible Category | Typical Activities for a Legal‑tech Rollout | Why it Matters

Data & Knowledge Assets
‑ Bulk extraction and cleaning of legacy data
‑ Building taxonomies and classifications
‑ Mapping classifications to analytics and reporting
Why it matters: Good data are the fuel for automation and analytics; without them, AI models and search tools under‑perform.

Process Redesign & Change Management
‑ Re‑engineering intake, approval, and filing flows
‑ New SLAs, escalation paths, and KPIs
‑ Communications, playbooks, internal marketing
Why it matters: Productivity comes from new ways of working, not the tool alone.

Human Capital
‑ Up‑skilling staff on templates, dashboards, and AI prompts
‑ Creating a “champion” network
‑ Time for experimentation and feedback
Why it matters: Skill gaps are the #1 reason AI and CLM projects stall.

Organizational / Governance Capital
‑ Defining data‑ownership and model‑risk policies
‑ Updating retention and privilege rules
‑ Steering committee cadence
Why it matters: Clear governance lets the tech scale safely and keeps regulators happy.

Continuous‑improvement Capability
‑ Instrumenting dashboards
‑ A/B tests on workflow tweaks
Why it matters: Prevents the curve from flattening after go‑live.



How does it apply to Legal Tech Implementations?

You should plan on several dollars of complementary spending for every dollar of licenses, especially early on. If the project succeeds, the invisible balance sheet of playbooks, templates, and new skills can easily be worth an order of magnitude more than the hardware and software on which it rests.

 

A Practical Takeaway for Budget Planning

Stage of Adoption | Tech Spend (illustrative) | Intangible Spend that Same Year | Cumulative Intangible Asset Stock

Pilot / Year 1: $1M tech; $3–4M intangibles (data, redesign, training); cumulative stock $3–4M.

Scale‑out / Years 2–4: $1M p.a. tech; $1–2M p.a. intangibles (optimisation, governance); cumulative stock $6–10M by Year 4.

Steady state: $1M p.a. tech; $0.5–1M p.a. intangibles (tuning, refresh); the stock can easily reach ≈ 10X the original tech base over several years.
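The cumulative column is simple arithmetic. A quick sanity check using the midpoints of the illustrative ranges above:

```python
# Sanity-check the cumulative intangible stock from the illustrative table,
# using midpoints: $3.5M in the Year 1 pilot, then $1.5M p.a. in Years 2-4.
tech_per_year = 1.0                    # $1M of licenses each year (millions)
intangible = [3.5] + [1.5] * 3         # Year 1, then Years 2-4

cumulative = sum(intangible)           # intangible stock by end of Year 4
ratio_year1 = intangible[0] / tech_per_year
print(cumulative, ratio_year1)         # 8.0 falls inside the $6-10M range; 3.5:1 in Year 1
```

Note how the Year 1 ratio of intangible to tech spend is roughly 3.5 to 1, which is where the later "$1 for tech / $3–4 for intangibles" rule of thumb comes from.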



Applying the Science to Avoid Failures & Harm

  1. Budget for a valley: Plan extra time and money for training, data work, and process redesign.

  2. Track leading rather than lagging metrics: Early indicators like cycle time, template accuracy, or defect rates turn positive well before aggregate productivity does.

  3. Communicate the curve to stakeholders so the temporary dip is understood, not mistaken for project failure.

  4. Invest in the complements—human capital, new workflows, and governance—because they flatten the dip and steepen the rebound.

 

How is this Even Possible with Small Budgets and No Slack Time?!

I recognize most of you will be eye-rolling at the $1M example technology budget above. Most in-house groups will struggle to get a tenth of that. So how can you implement technology, and make the intangible investments required to succeed, when you have a small budget and zero slack time on your team?

Start by selecting solutions that include some or all of the following:

  1. Pre-built or easily templated configurations - eliminate lengthy configuration timelines by using or modifying provided examples.
  2. Start-small solutions that scale - solve a single problem that causes waste and overwork to validate the technology and approach, then scale up when possible.
  3. Consistent, reusable design, data, and functional patterns - a single solution with a consistent design and functionality approach requires less training and learning.
  4. Usage and performance analytics baked in - analyze and improve performance with pre-built reports and dashboards while enabling multiple data views.
  5. Budgeting backward from constraints - knowing the J-curve math ($1 for tech / $3-4 for intangibles) means you can allocate the right amounts to technology vs. the intangibles and drive success on small, iterative projects.
  6. Compounding integrations of previous investments - thoughtfully integrate existing (i.e., currently used) technology into your solution to compound your capabilities with less learning.
  7. Recycling waste into improvements - focus on improving wasteful operating practices and recycle the recaptured time (i.e., new slack time) into further improvements and expanded scope.

Apply the Evidence to Avoid Failures

You are already overwhelmed by the volume and scope of work you have to contend with, so you don't have time to learn all these lessons on your own by trial and error. That makes applying the best of what is already known about knowledge-centric operations and technology, like the J-curve of productivity, even more important.

Don't fall prey to making operational decisions based on management myths, anecdotes, and sloppy benchmarking. Use evidence-based practices already proven to work.

We'd love to hear about your challenges and to provide support where we can. No strings. Leave a comment or reach out today to discuss.