Compassionate AI - The Definition

What Makes AI "Compassionate"? A Framework for Animal Welfare Technology
How do we build AI systems that genuinely care about animal welfare—not just as a side effect, but as a core purpose?
At Resonetta, we're building AI tools for dairy farmers. Our models can predict milk output, identify health risks, and recommend interventions. But as we develop these systems, we've been wrestling with a deeper question: Are we building compassionate AI?
It's a question worth taking seriously. The term "compassionate AI" gets thrown around a lot, but what does it actually mean? Can a mathematical model—lines of code optimizing an objective function—ever be truly compassionate?
Here's what we've learned.
The Uncomfortable Truth About Most "Welfare-Friendly" AI
Let's say we build a model that predicts milk production, and it discovers that cows who get regular brushing produce slightly more milk. The farmer installs automated brushes. The cows are happier. Production goes up. Win-win.
But is that compassionate AI?
I'd argue no. That's production AI that happened to stumble into a welfare benefit. The cow's happiness was never the goal—it was a lucky correlation.
Here's the test: Would the model still recommend brushing if it had zero effect on milk output but made the cows significantly happier?
If the answer is no, then welfare isn't a value in your system. It's an accident.
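You can even write the test down as code. Here's a minimal sketch of the idea; the `recommend()` function, the feature name, and the intervention label are hypothetical stand-ins, not a real API:

```python
# A sketch of the "zero-effect" test. recommend() and the feature name
# are illustrative stand-ins, not a real pipeline.

def passes_compassion_test(recommend, herd_data):
    """Ask: does brushing survive once its milk payoff is removed?"""
    # Counterfactual input: identical data, but the brushing -> milk-yield
    # effect is zeroed out while the welfare indicators stay unchanged.
    counterfactual = dict(herd_data)
    counterfactual["brushing_milk_effect"] = 0.0

    # If brushing vanishes from the recommendations when the production
    # payoff is gone, welfare was never a value in the system.
    return "install_brushes" in recommend(counterfactual)
```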
A Hierarchy of Compassion in AI Systems
As we've thought through this problem, we've identified five distinct levels of how AI systems can relate to animal welfare:
Level 1: Compassion as Byproduct
The system optimizes for production or efficiency. Welfare improvements happen incidentally when they correlate with output. This isn't compassionate AI—it's luck that could reverse the moment the correlation changes.
Level 2: Compassion as Constraint
The system optimizes for production but with hard limits: "Never recommend anything that increases lameness risk above X threshold." This is better—it prevents the worst outcomes—but welfare remains secondary. It's a floor, not a goal.
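A sketch of what that constraint might look like in code (the candidate objects, their attributes, and the risk model are hypothetical):

```python
# Level 2 sketch: optimize production under a hard welfare constraint.
# Candidates, expected_milk_gain, and predicted_lameness_risk() are
# all illustrative names.

LAMENESS_RISK_LIMIT = 0.05  # illustrative threshold, set by humans

def constrained_recommendations(candidates, predicted_lameness_risk):
    """Drop any intervention above the lameness-risk ceiling, then rank
    the survivors purely by expected production gain."""
    safe = [c for c in candidates
            if predicted_lameness_risk(c) <= LAMENESS_RISK_LIMIT]
    # Welfare acted only as a filter; the ordering is still all production.
    return sorted(safe, key=lambda c: c.expected_milk_gain, reverse=True)
```

Notice that welfare never enters the ranking. That's exactly what makes it a floor rather than a goal.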
Level 3: Compassion as Co-Objective
Welfare metrics are explicitly included in the objective function alongside production. The system actively balances both. Now we're getting somewhere—the AI is genuinely "trying" to improve welfare, not just avoiding harm.
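In its simplest form, this could be a weighted loss; a sketch with illustrative names and an arbitrary weight:

```python
# Level 3 sketch: welfare as an explicit term in the training objective.
# The weight is the interesting part: it is a human value judgment,
# written down where everyone can see it. Names are illustrative.

WELFARE_WEIGHT = 0.5  # chosen by people, not discovered by the model

def co_objective_loss(production_error, welfare_error):
    """Penalize the system for welfare failures as well as production
    failures, instead of treating welfare as an afterthought."""
    return production_error + WELFARE_WEIGHT * welfare_error
```

The arithmetic is trivial; the point is that the tradeoff is explicit and auditable rather than buried in a correlation.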
Level 4: Compassion as Primary Lens
The system is designed first and foremost to identify suffering, understand its causes, and recommend interventions to reduce it. Production benefits become the co-benefit rather than the driver. This is what we'd call genuinely compassionate AI.
Level 5: Compassion as Process
Perhaps the most nuanced level: the AI doesn't make decisions for you. Instead, it surfaces tradeoffs honestly. "This intervention will increase output by 3%, but behavioral indicators suggest increased stress." It gives farmers the information to make compassionate choices themselves, treating them as moral agents rather than optimization targets.
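A sketch of what that could look like, with hypothetical predictors and field names:

```python
# Level 5 sketch: surface the tradeoff, don't decide it. The two
# predictor callables and the field names are hypothetical.

from dataclasses import dataclass

@dataclass
class TradeoffReport:
    intervention: str
    output_change_pct: float        # predicted production change
    stress_indicator_change: float  # predicted behavioral stress change

    def summary(self) -> str:
        return (f"{self.intervention}: output {self.output_change_pct:+.1f}%, "
                f"stress indicators {self.stress_indicator_change:+.2f}")

def describe(intervention, predict_output_change, predict_stress_change):
    """Report both sides honestly and leave the decision to the farmer."""
    return TradeoffReport(
        intervention=intervention,
        output_change_pct=predict_output_change(intervention),
        stress_indicator_change=predict_stress_change(intervention),
    )
```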
But Wait—Can Math Really Be Compassionate?
Here's where we need to be precise about something: AI doesn't need to feel compassion to enact compassionate outcomes.
A hospital doesn't "feel" compassion. But it can be designed and operated in deeply compassionate ways. The compassion lives in the choices humans make about how it runs.
AI is an instrument of human compassion, not a source of it. The gradients don't care. The loss function doesn't feel. But what you choose to put in the loss function—that's where your values live.
When we embed welfare metrics as first-class objectives, when we design systems that surface suffering rather than hide it, when we build tools that trust farmers to make ethical choices with good information—we're encoding human compassion into technological systems.
That's not less meaningful because the AI doesn't "feel" it. It might actually be more meaningful, because it scales human care to contexts where no human could be present.
What Compassionate AI Actually Looks Like
So what would it take to move from Level 1 (welfare as byproduct) to Level 4 or 5 (genuine compassion)? Here's what we're working toward:
Welfare as an explicit output. Don't just predict production—predict behavioral indicators of contentment, stress levels, social interaction quality. Show farmers both numbers (there's a sketch of what this could look like after this list).
Models that predict welfare directly. Train on welfare outcomes, not just production proxies. Lameness risk, fear responses, play behavior. These deserve their own models, not just correlations in a production model.
Surface the divergences. The most important moments are when welfare and production don't align. A compassionate AI doesn't hide those tensions—it illuminates them. "This will boost output, but here's the welfare cost." Then it trusts the farmer.
Value welfare even when it's economically "irrational." If brushing adds $2/cow in milk but creates $50/cow worth of happiness (however we'd measure that), does your system acknowledge both? Does it treat welfare as having intrinsic value, or only instrumental value?
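To make the first two items concrete, here's a rough sketch of a forecast that carries welfare numbers as first-class outputs alongside production, and can flag the divergences described above. The fields and the divergence rule are illustrative assumptions, not our production schema:

```python
# Sketch of welfare as a first-class output: the forecast carries welfare
# numbers alongside production, and can flag exactly the divergences a
# compassionate system should surface. All field names are illustrative.

from dataclasses import dataclass

@dataclass
class HerdForecast:
    milk_yield_kg: float
    stress_score: float          # from behavioral indicators
    lameness_risk: float
    play_behavior_index: float   # play is a positive welfare signal

    def diverges_from(self, baseline: "HerdForecast") -> bool:
        """True when production improves while welfare worsens:
        the moments that must be shown, not hidden."""
        production_up = self.milk_yield_kg > baseline.milk_yield_kg
        welfare_down = (
            self.stress_score > baseline.stress_score
            or self.lameness_risk > baseline.lameness_risk
            or self.play_behavior_index < baseline.play_behavior_index
        )
        return production_up and welfare_down
```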
A Working Definition
After all this thinking, here's where we've landed:
Compassionate AI is a system where welfare is a first-class objective—not a byproduct, not a constraint, but something the system explicitly values and optimizes for, even when it doesn't serve other goals.
Or, put another way:
Compassionate AI isn't about making machines that feel—it's about encoding human values of care and welfare into the objectives, constraints, and outputs of systems that shape how we treat sentient beings.
The compassion comes from us. The AI carries it forward.
Why This Matters
We're at a pivotal moment in agricultural technology. AI systems are becoming more powerful, more autonomous, more influential in how farms operate. The values we embed in these systems now will shape the lives of billions of animals for decades to come.
If we build AI that treats welfare as a lucky byproduct, we'll get welfare improvements only when they happen to align with production. And we'll get welfare declines when they don't.
But if we build AI that treats welfare as a core value—measured, optimized, surfaced, respected—we create tools that extend human compassion into every barn, every herd, every decision point where a farmer faces a tradeoff.
That's the AI we're trying to build at Resonetta. Not because our code feels compassion. But because we do, and we're determined to build systems that carry that forward.
Paul Mineau is the founder of Resonetta, which builds AI tools for compassionate dairy farming. Learn more at resonetta.com