Anemic Models vs Rich Domain Models


Expert Member
May 16, 2008
Hey all which do you prefer?

I have seen people fight passionately on both sides. Which do you guys prefer, and what do you encounter mostly in the workplace? Most of the corporates I've worked for have followed a strict no-logic-in-the-data-models view, where you have thick services and the models are only there for storing data. On personal projects I've tried to follow more of a DDD approach, though.


Expert Member
Jul 29, 2015
A topic close to my heart... I am a big proponent of DDD (rich models). What most people refer to as data models are mostly DTOs, i.e. objects purely for transporting data across boundaries, e.g. to outward-facing services or the DB. Then these DTOs get used inside the domain, resulting in procedural (non-OO) code and anemic models. I made those mistakes in the past.
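To make the contrast concrete, here is a minimal sketch (the `Order` example and all names are hypothetical, not from any particular codebase):

```kotlin
// Anemic style: the model is a mutable bag of data and all the
// business rules live in a separate service, so nothing stops other
// code from bypassing the service and mutating the fields directly.
data class AnemicOrder(var totalCents: Int, var status: String)

class AnemicOrderService {
    fun applyDiscount(order: AnemicOrder, percent: Int) {
        require(order.status == "OPEN") { "cannot discount a closed order" }
        order.totalCents -= order.totalCents * percent / 100
    }
}

// Rich style: state is private and every change goes through a method
// that protects the invariant, so the rule cannot be bypassed from
// outside the object.
class RichOrder(totalCents: Int) {
    var totalCents: Int = totalCents
        private set
    var closed: Boolean = false
        private set

    fun applyDiscount(percent: Int) {
        check(!closed) { "cannot discount a closed order" }
        totalCents -= totalCents * percent / 100
    }

    fun close() {
        closed = true
    }
}
```

Both variants compute the same discount; the difference is purely in who is allowed to touch the state.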

For me, the main challenge was properly hydrating a domain object from the DB. We know that a domain object has to protect its state. But how can you write that object's state to a DB and then recreate the object at a later stage if you do not have access to its internals? Don't say 'use EF': EF is a persistence framework and has no business in the domain. Then, when you have an object that is an aggregate root, e.g. a sales order, the sales order needs to be able to lazy-load its lines. So how would that work? Etc., etc.
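One common way people tackle the hydration problem is a dedicated reconstitution factory on the aggregate itself: the constructor stays private, normal creation enforces the "new object" rules, and the repository rebuilds previously validated state through a separate entry point. A sketch (all names hypothetical):

```kotlin
class SalesOrder private constructor(
    val id: Long,
    private val lines: MutableList<OrderLine>,
    status: Status
) {
    enum class Status { OPEN, SHIPPED }

    var status: Status = status
        private set

    companion object {
        // Normal creation path: enforces the rules for brand-new orders.
        fun create(id: Long): SalesOrder =
            SalesOrder(id, mutableListOf(), Status.OPEN)

        // Reconstitution path, intended only for the repository: it
        // rebuilds previously validated state (e.g. an already-shipped
        // order) without exposing public setters to the rest of the domain.
        fun rehydrate(id: Long, lines: List<OrderLine>, status: Status): SalesOrder =
            SalesOrder(id, lines.toMutableList(), status)
    }

    fun addLine(sku: String, qty: Int) {
        check(status == Status.OPEN) { "cannot modify a shipped order" }
        lines.add(OrderLine(sku, qty))
    }

    fun lineCount(): Int = lines.size
}

data class OrderLine(val sku: String, val qty: Int)
```

For the lazy-loading question, one option in the same spirit is to pass the repository a `() -> List<OrderLine>` supplier into `rehydrate` instead of the list itself, so the lines are only fetched when the aggregate first touches them.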

Anyway, I would love to discuss this further.


Executive Member
Apr 15, 2005
Big topic
The two notions discussed here are arguably tied to the OO paradigm specifically; in FP, for example, there's really no concept of an anemic domain model.

OO tends to encourage encapsulating data and behavior (methods) together in an object: objects are expected to have state, and all the behavior that mutates that state is expected to be encapsulated within the object.

In FP, where data is immutable by default, it is more common to keep behavior (functions) and data in separate modules.
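The FP-leaning style can be sketched even in Kotlin (hypothetical `Account` example): the data is an immutable value, and the behavior lives in free functions that return new values rather than mutating anything.

```kotlin
// Data: an immutable value with no behaviour attached.
data class Account(val owner: String, val balance: Int)

// Behaviour: free functions kept apart from the data; each returns a
// new Account instead of mutating the old one.
fun deposit(a: Account, amount: Int): Account =
    a.copy(balance = a.balance + amount)

fun withdraw(a: Account, amount: Int): Account {
    require(amount <= a.balance) { "insufficient funds" }
    return a.copy(balance = a.balance - amount)
}
```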

Expression Problem
When deciding on the approach it's important to understand the underlying problem, which is collectively known as the Expression Problem:

"The expression problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts)."

Basically, it is the problem of extensibility in programs that manipulate data types through a defined set of operations. As our programs evolve, we face the challenge of extending them with new data types and new operations, but we preferably want to avoid modifying existing, quality-assured, battle-hardened code simply because input/output schemas or parameters needed adjustment to accommodate the new requirements.
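The tension is easy to see in a toy example. With an OO decomposition, adding a new data case is cheap but adding a new operation touches every class; with an FP-style (sealed type + functions) decomposition, it's exactly the reverse:

```kotlin
// OO decomposition: adding a new shape is easy (just a new class),
// but adding a new operation means editing every existing class.
interface Shape { fun area(): Double }
class Circle(val r: Double) : Shape { override fun area() = Math.PI * r * r }
class Square(val s: Double) : Shape { override fun area() = s * s }

// FP-style decomposition: adding a new operation is easy (just a new
// function), but adding a new case means editing every existing `when`.
sealed interface Expr
data class Lit(val n: Int) : Expr
data class Add(val l: Expr, val r: Expr) : Expr

fun eval(e: Expr): Int = when (e) {
    is Lit -> e.n
    is Add -> eval(e.l) + eval(e.r)
}

fun show(e: Expr): String = when (e) {
    is Lit -> e.n.toString()
    is Add -> "(${show(e.l)} + ${show(e.r)})"
}
```

The Expression Problem asks for a design where both directions of extension are possible without touching the existing code, and neither decomposition above achieves that on its own.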

Hence there's a strong desire to respect the integrity of existing abstractions, and a strong desire for our modifications to be separate:
  • separately coded
  • separately compiled
  • separately quality assured
  • separately deployed
  • etc.
Current solutions
FP's approach, whilst substantively different from OO's, has not concretely solved this yet either; examples of partial solutions include:
  • Swift: Nullable / Non-Nullable (Optional) and opaque result types
  • Kotlin: Nullable / Non-Nullable
  • Scala 3 (Dotty): union types that are mathematically commutative
Whilst these are all good ideas, they are still a fairly rigid approach to the problem.
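For reference, the Kotlin feature mentioned above looks like this: the type system distinguishes `String` (never null) from `String?` (maybe null), so the compiler forces callers to handle absence explicitly. A small illustration (the `firstWord` function is just a made-up example):

```kotlin
// The ?. safe-call chain short-circuits to null if `s` is null,
// and ?: supplies the fallback value for that case.
fun firstWord(s: String?): String =
    s?.split(" ")?.firstOrNull() ?: "<none>"
```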

Proposed Flexible Approach
The only recent proposal that IMO more concretely addresses flexibility is what's termed Aggregate Schemas in Clojure, as presented in a recent talk by Rich Hickey: