What “Agentic” Means for Turf Management
Written by Valentine Godin

I’ve spent the last four years building technology for turf professionals. In that time, I’ve visited facilities across twelve countries, sat with greenkeepers at 5am before the first mow, and listened to the same frustration everywhere: “I have more data than ever, and I still feel like I’m deciding by instinct.”
That’s not because the data is wrong. It’s because the architecture is broken.
Most facilities now have a weather station, soil sensors, a fleet tracker, a nutrition plan in Excel, a spray log somewhere, and a playing quality assessment in a different app. Each one arrived with a dashboard. Each dashboard arrived with a login. Each login arrived with a promise that visibility would solve the problem.
It hasn’t. Because the problem was never visibility. It was connection.
Your weather data doesn’t talk to your nutrition plan. Your soil moisture readings don’t adjust the irrigation schedule on their own. The growth data from last season sits in a file nobody has opened since November. You — the greenkeeper, the course manager — are the integration layer. You hold all of it in your head, make the call at 6am, and hope the timing was right.
That is not a technology failure. It is an architecture failure. The tools were built to display, not to think.
This season, that difference got a name.
What “agentic” actually means
In March 2026, NVIDIA’s Jensen Huang stood on stage at GTC and positioned what he called “agentic scaling” as the fourth law of AI progress — alongside more data, more compute, and longer reasoning. Days later, Microsoft launched Copilot Cowork, a multi-model system built with Anthropic’s Claude that coordinates agents across an organisation. They now track over 500,000 agents running internally. Ninety per cent of the Fortune 500 are using some form of AI copilot.
The word “agentic” has become the technology industry’s shorthand for AI systems that don’t just respond — they persist, remember, monitor, and act over time within defined boundaries.
That sounds abstract. It isn’t. Let me walk you through three levels — and I think you’ll recognise exactly where your current setup sits.
Level 1: The dashboard. It shows you data. Soil moisture is at 22%. Temperature hit 28 degrees yesterday. Your fleet ran 14 hours. It waits for you to log in, find the number, and decide what it means. If you don’t look, nothing happens. The data exists. The intelligence doesn’t.
Level 2: The chatbot. You ask a question, you get an answer. “What was the average soil temperature last week?” It responds accurately. But it has no memory of what you asked yesterday. It doesn’t know your nutrition plan. It can’t connect the temperature answer to the fact that you’re about to apply a biostimulant that needs soil above 12 degrees to activate. It’s reactive. You have to know the right question to ask — which means you already need most of the answer.
Level 3: The agentic copilot. It monitors continuously. It holds context — your site’s soil profile, your annual maintenance plan, your product inventory, your historical growth data. It connects signals across domains: weather, agronomy, resources, playing quality. It doesn’t wait for you to ask. When conditions shift, it flags what matters, explains why, and recommends a specific action. After you act, it asks what happened — and uses that outcome to refine its understanding for next time.
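The contrast can be caricatured in a few lines of code. This is a deliberately minimal sketch, not any real product: every function name, field, and threshold here is invented for illustration.

```python
def level_1_dashboard(readings: dict) -> dict:
    """Level 1: display only. Interpretation is left entirely to the reader."""
    return readings  # e.g. {"soil_moisture_pct": 22, "t_max_c": 28}

def level_2_chatbot(readings: dict, question: str) -> str:
    """Level 2: answers the question asked, with no memory and no plan context."""
    if question == "soil moisture?":
        return f"Soil moisture is {readings['soil_moisture_pct']}%."
    return "I can answer questions about current readings."

def level_3_copilot(readings: dict, plan: dict) -> list[str]:
    """Level 3: compares readings against the site's plan and flags unprompted."""
    alerts = []
    if readings["soil_moisture_pct"] < plan["min_moisture_pct"]:
        alerts.append("Moisture below plan threshold: review irrigation.")
    return alerts
```

The point of the caricature: Level 1 returns raw numbers, Level 2 needs the right question, and only Level 3 takes the plan as an input and speaks first.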
The gap between Level 2 and Level 3 is not incremental. It is structural. A chatbot is a search engine with better manners. A copilot is a colleague who has been reading the same data as you, remembers what happened last season, and is already thinking about next week.
This is where the entire technology industry is heading. Not just in turf — across every operational domain. From tools that display to systems that accompany. From reactive answers to persistent intelligence.
In short: at Level 1, you are the logic. At Level 2, you ask and it answers. At Level 3, it thinks alongside you.
Why does this matter particularly for turf? Because turf management is, by nature, a multi-domain problem. You are managing a living system influenced by weather, soil biology, hydrology, plant physiology, mechanical wear, and human expectations — simultaneously. No single data source tells you what to do. The answer always lives at the intersection. That is exactly the kind of problem agentic systems are designed to solve.
Three characteristics define a genuinely agentic system — and they are worth understanding, because every technology vendor will start using this language in the next twelve months:
Persistence. The system maintains context over time. It knows you applied iron sulphate three weeks ago, that rain washed most of it through, and that the colour response was below expectation. It carries that forward.
Orchestration. It connects domains that are usually siloed. Weather data informs agronomy. Agronomy informs resource planning. Resource planning informs fleet scheduling. The system reasons across the full chain, not within one slice.
Feedback loops. It learns from outcomes, not just inputs. Every recommendation becomes a hypothesis. Every result becomes training data. Over time, the system’s understanding of your specific site — your soil, your microclimate, your cultivars, your constraints — deepens in a way no static model can replicate.
What each level looks like through a real season
Let me make this concrete. Same facility, same April morning. GDD accumulation is tracking 15% ahead of the five-year average after a mild winter.
At Level 1 — you open your weather dashboard. You see the temperature graph trending up. You might notice GDD is running ahead, if you know to look. You cross-reference with your nutrition plan in Excel. You check the spray log — when was the last PGR application? You look at the calendar. You do the maths in your head. Maybe you catch that the first PGR window is arriving ten days early. Maybe you don’t — you had a tournament to prepare for and the dashboard didn’t flag it.
Result: depends on whether you had time to look
At Level 2 — you ask the chatbot: “What’s my cumulative GDD?” It tells you. You ask: “When should I apply the first PGR?” It gives you a textbook answer based on general thresholds. But it doesn’t know your specific plan, your product inventory, or what happened last year when you applied at this GDD level. You still have to connect the dots.
Result: faster access, but the integration is still on you
At Level 3 — the copilot has been tracking GDD accumulation against your annual maintenance plan. It flags, unprompted: the first PGR window is arriving roughly ten days earlier than planned. It checks your product inventory: enough for one application, but the second is unaccounted for. It recommends adjusting the schedule, proposes revised timing, and notes the soil temperature threshold that needs to be met before application. You review, adjust, apply.
Forty-eight hours later, the system checks the clipping yield data. It asks: did the growth suppression meet expectation? You indicate it was weaker than anticipated. The system logs that against the specific conditions — soil temperature, moisture, product rate, cultivar — and the next time a similar window appears, it adjusts its confidence accordingly.
Result: the system connected the signals, recommended action, and learned
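The arithmetic behind that GDD flag is simple enough to sketch. A minimal example, assuming the common simple-average method with a 0-degree base temperature — base temperatures and application thresholds vary by species and product label, so treat every number here as a placeholder:

```python
def daily_gdd(t_max: float, t_min: float, t_base: float = 0.0) -> float:
    """GDD for one day: mean temperature above the base, floored at zero."""
    return max(0.0, (t_max + t_min) / 2 - t_base)

def cumulative_gdd(days: list[tuple[float, float]], t_base: float = 0.0) -> float:
    """Sum daily GDD over a season's (t_max, t_min) readings."""
    return sum(daily_gdd(hi, lo, t_base) for hi, lo in days)

def window_flag(gdd_now: float, gdd_avg: float, threshold: float = 0.15) -> bool:
    """Flag when accumulation runs ahead of the multi-year average by more
    than the threshold (15%, matching the scenario above)."""
    return gdd_avg > 0 and (gdd_now - gdd_avg) / gdd_avg > threshold
```

The calculation is trivial; the point of the example is that a Level 3 system runs it every day against your plan, while at Level 1 it only happens when you do it in your head.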
That is one interaction in one week. Multiply it across nutrition, irrigation, disease pressure, overseeding timing, aeration scheduling, tournament preparation — and the architecture becomes clear. Not a dashboard that shows what happened. A system that accompanies the operator through the season, learning as it goes.
No single data point makes the difference. The value is in the connections between them, accumulated over time, specific to your site.
Where does your setup sit?
If you are evaluating your technology stack — or being pitched a new one — three questions cut through the noise.
Does it connect domains, or just display them? If your weather data, agronomy data, and resource data live in separate interfaces with separate logins, you have dashboards. The integration is still happening in your head.
Does it remember? Ask your system what you applied to the 5th green four months ago and what the result was. If it can’t answer — or if the answer requires you to manually search through logs — it has no operational memory. Without memory, there is no learning.
Does it close the loop? After you take an action based on the system’s information, does the system track the outcome and use it to improve the next recommendation? If not, you are running on static models that will never get smarter no matter how long you use them.
If the answer to any of these is no, you have a dashboard, not a copilot.
Most systems today sit firmly at Level 1, with some beginning to offer Level 2. That is not a criticism — it reflects where the technology has been. But Level 3 is arriving, and it changes the relationship between the operator and the technology entirely.
Where this is heading
The copilot model is moving toward what I’d describe as intelligent accompaniment. Technology that doesn’t demand attention — it earns it by being right often enough that you trust it, and transparent enough that you can verify it.
The operator remains the pilot. The agronomist’s judgement, the greenkeeper’s intuition, the course manager’s experience — these are not replaced. They are supported by a system that handles the connective work: tracking conditions across time, linking actions to outcomes, surfacing what matters before it becomes urgent.
The best technology disappears into the work. You stop noticing the system and start noticing that your decisions are better-timed, better-informed, and better-connected to what actually happened last time.
I’m curious where your setup sits today. Level 1, 2, or somewhere in between? What does your current technology stack look like — and where does the connection break? I’d genuinely like to know. Share your experience — it helps all of us understand where this industry actually is, not where the marketing says it should be.
About Valentine Godin
Founder and CEO of Maya Global