James O. Coplien and Trygve Reenskaug have recently published the first article of a series introducing a new architectural approach to object-oriented programming based on the Data, Context and Interaction (DCI) pattern.
In this first article, the authors argue that, though object-oriented programming is instrumental for capturing structure, it does not allow fully expressing user mental models because it fails to represent “end user behavioral requirements”. To illustrate what they mean by “behavior”, they take the example of a Savings Account object that can, for instance, decrease its balance and do a withdrawal. According to Coplien and Reenskaug, “these two behaviors are radically different”: “decreasing the balance is merely a characteristic of the data: what it is. To do a withdrawal reflects the purpose of the data: what it does”. The ability to decrease the balance characterizes the data in any situation – it is stable. Withdrawal, on the contrary, involves “interactions with an ATM screen or an audit trail” – it is dynamic; it is no longer about “being” but rather about “doing”.
While the user model naturally combines the being and the doing parts, “there is little in object orientation, and really nothing in MVC, that helps the developer capture doing in the code.” “Object-orientation lumped [these two actions] into the same bucket”, making it difficult to separate “simple, stable data models from dynamic behavioral models”, which is nevertheless essential from an architecture and maintenance perspective. Moreover, pure object orientation requires splitting up large algorithms and distributing their parts – methods – to the objects most tightly linked with each method. However, while some algorithms can live within a single object, “interesting business functionality often cuts across objects.”
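In code, the being/doing split the authors describe might be sketched roughly as follows (a minimal, hypothetical Scala sketch; the method names are illustrative and not taken from the article):

    class SavingsAccount(private var balance: BigDecimal) {
      // "What it is": stable characteristics of the data, valid in any situation.
      def decreaseBalance(amount: BigDecimal): Unit = { balance -= amount }
      def increaseBalance(amount: BigDecimal): Unit = { balance += amount }
      def availableBalance: BigDecimal = balance
    }

    // "What it does": a withdrawal also involves an ATM screen, an audit trail and
    // other objects; in plain object orientation this use-case behavior has no
    // natural home, which is the gap DCI sets out to fill.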
To represent these dynamic behavioral models, James and Trygve advocate using the DCI model, which is based on three concepts:
- The data, expressed with domain objects representing the stable parts;
- The interactions, expressed in terms of roles that are “collections of behaviors that are about what objects do”;
- The context, that can be viewed as “a table that maps a role member function (a row of the table) onto an object method (the table columns are objects). The table is filled in based on programmer-supplied business intelligence in the Context object that knows, for a given Use Case, what objects should play what roles.”
To provide readers with a concrete illustration, the authors use the example of a Money Transfer Use Case. Even though the transfer involves the savings account and the investment account, within this particular Use Case the user reasons rather in terms of “source account” and “destination account”. These are roles, and the interactions of the Money Transfer Use Case can be described through their algorithms. These roles can then be played by different objects depending on the context: in this example, the source account role will be bound to the savings account object.
A general design concept that allows representing roles in code is the trait, but its implementation depends on the constructs available in a given programming language: traits in Scala, Squeak Traits in Squeak Smalltalk, templates in C++, etc. The great advantage of this approach is that the code example provided by the authors is “almost a literal expansion from the Use Case”:
That makes it more understandable than if the logic is spread over many class boundaries that are arbitrary with respect to the natural organization of the logic—as found in the end user mental model.
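In that spirit, a minimal Scala sketch of the Money Transfer Use Case, with roles rendered as traits and a Context binding objects to roles, might look like the following (the names and details are illustrative and are not the authors' actual code):

    // Data: dumb, stable domain objects.
    class Account(val id: String, private var balance: BigDecimal) {
      def decreaseBalance(amount: BigDecimal): Unit = { balance -= amount }
      def increaseBalance(amount: BigDecimal): Unit = { balance += amount }
      def availableBalance: BigDecimal = balance
    }

    // Interaction: roles, expressed as traits, capture what objects do in the Use Case.
    trait TransferSource { this: Account =>
      def transferTo(destination: TransferDestination, amount: BigDecimal): Unit = {
        require(availableBalance >= amount, "insufficient funds")
        decreaseBalance(amount)
        destination.receive(amount)
      }
    }

    trait TransferDestination { this: Account =>
      def receive(amount: BigDecimal): Unit = increaseBalance(amount)
    }

    // Context: knows, for this Use Case, which objects play which roles.
    class MoneyTransferContext(source: TransferSource, destination: TransferDestination) {
      def execute(amount: BigDecimal): Unit = source.transferTo(destination, amount)
    }

    object TransferDemo extends App {
      // The roles are attached to the domain objects for this Use Case only.
      val savings    = new Account("savings", BigDecimal(1000)) with TransferSource
      val investment = new Account("investment", BigDecimal(500)) with TransferDestination
      new MoneyTransferContext(savings, investment).execute(BigDecimal(200))
      println(savings.availableBalance)    // 800
      println(investment.availableBalance) // 700
    }

Here the body of transferTo reads like the steps of the Use Case, while Account keeps only the stable data behavior; the Context decides which object plays which role for this particular interaction.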
This article triggered a great number of reactions and criticisms that allowed James and Trygve to provide some clarifications about the DCI concept.
Michael Feathers and many other commentators argue that assigning the responsibility for the transfer to the source account is arbitrary and doesn’t really fit users’ mental model, where the transfer is not done by either account but rather by a bank or by “transaction objects which map to the user's conception of an interaction”. John Zabroski, for instance, suggests using the analysis class TransferSlip. Some others argue that DCI relates to things that people already know: “traits” in some languages, “the general idea [of functional programming] that algorithms matter and should be able to be clearly expressed”, etc.
James O. Coplien responds that DCI “tries to reproduce the convenience of algorithmic expression that procedural languages [e.g. Fortran] used to give us combined with many of the good domain modeling notions from 1980s object orientation.” Traits in languages like Scala are a “way of rendering the solution”, but different constructs can be used in other languages to yield a DCI architecture. Indeed, what counts is not the tool suggested or the example used but the architectural separation between: 1) behavior that is specific to the domain object whatever the situation, and 2) behavior that is context-specific, belongs to business logic and often cuts across objects. As Bill Venners puts it, “if the account concept is involved in 10 use cases of your application, you may end up placing some behavior for each of those use cases into class Account”, and this is a big challenge for the designer. So letting “an object have a different class in each context” by applying DCI is “an attempt to improve the understandability of OO programs”:
[…] this article points out that sometimes you can end up wanting to put too much [behavior on] objects, and that different subsets of all that behavior may be needed in different contexts. [The authors suggest that] you model that extra stuff in traits, and that the traits would map to roles in the user's mental model. And then in a particular context, or use case, you add on the traits that you need for that context to the dumb domain objects.
To emphasize the readability yielded by DCI, Coplien points out four reasons why it renders code easier to read and debug:
1. The context switches across business functions are fewer and more closely follow the mental model (role-based) than the programmer model (domain-based);
2. Inclusion polymorphism is almost completely gone. When I call foo I get foo: not one of many foos up and down the subtyping hierarchy.
3. I can find a test point for something of business value: that is, I can really do BDD. That makes it easier to develop test cases to support debugging.
4. I do less run-time debugging because the code is more readable at compile time.
Trygve Reenskaug stresses that to understand DCI, one needs to “lift one's eyes from the class abstraction and open one's mind to an additional abstraction that applies to such object structures” and “to add an object abstraction that augments the class and that retains object identity”: a role.
Community comments
Discussion over at Artima
by Dan Tines,
www.artima.com/articles/dci_vision.html
Some heavy criticism over there.
CLOS?
by Dan Tines,
I tend to agree with the sentiment that DCI tries to convey, even though I wasn't a big fan of the paper they presented.
But it almost seems like the problem they have is with the OO systems of C#/Java, etc....
What happens when you have an OO system like CLOS where data is separated from methods (generic functions) and you have multiple-dispatch?
Re: Clojure
by Francisco Jose Peredo Noguez,
Or like Clojure
Re: CLOS?
by Sadek Drobi,
Part of it is a matter of code organization. So for this part the question is where to put the code. Data structures can carry functions (or methods), and these functions are directly related to the data. They live with the data and they are defined in the same scope (file) as the data. When you need to look at functions that are directly related to the data, you know where to find them. These are, in a sense, methods defined inside classes or objects in OOP, and functions that are defined in the same module as the data structures or as part of records in FP languages.
Another kind is functions that are related to a context. They often operate on several instances of data structures and they live in the part of the program that is concerned with the context. In my favorite programming language, Haskell, this can be represented using Type Classes. And in Scala they are traits. I guess explicitly differentiating the two kinds of functions is important for code organization and readability even before talking about the user's model. Now, once the user's model is involved, the discussion takes another direction which is subjective, philosophical at best, and leads nowhere IMHO, as every one of us is a user and can claim some distinct model.
Worth noting, though, that I do agree with the illustrated points about what mainstream OOP implementations lack (including algorithm structure and instance/class differentiation), and as I mentioned, I think some of these issues are well addressed in languages like Haskell and Scala.
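The split described above could be sketched in Scala roughly as follows (a minimal sketch with purely illustrative names):

    // Data plus the functions that are intrinsically about the data: they live
    // where the data is defined.
    case class Account(id: String, balance: BigDecimal) {
      def canCover(amount: BigDecimal): Boolean = balance >= amount
    }

    // Context-level functions kept out of the data structure. In Haskell this could
    // be a type class; in Scala, a trait mixed in wherever the use case lives.
    trait TransferRules {
      def transferable(from: Account, to: Account, amount: BigDecimal): Boolean =
        from.id != to.id && from.canCover(amount)
    }

    object TransferContext extends TransferRules {
      def main(args: Array[String]): Unit = {
        val a = Account("a", BigDecimal(100))
        val b = Account("b", BigDecimal(0))
        println(transferable(a, b, BigDecimal(40))) // true
      }
    }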
do not make things complex
by cai chao,
I do not think separating the data and the procedures is a good idea, but sometimes we can create service classes. For the example in the DCI article, we can use a TransferService class to provide the transfer logic; it is simple and easy to maintain. To make the roles explicit, I think using interfaces is much better: we can define the interfaces, a class can implement multiple interfaces, and so an object can play different roles in different contexts.
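The service-plus-interfaces alternative suggested above might be sketched roughly as follows (hypothetical names; the interfaces stand in for the roles):

    // Interfaces (traits here) modelling the roles an object can play.
    trait Debitable  { def debit(amount: BigDecimal): Unit }
    trait Creditable { def credit(amount: BigDecimal): Unit }

    class SavingsAccount(private var balance: BigDecimal) extends Debitable with Creditable {
      def debit(amount: BigDecimal): Unit  = { balance -= amount }
      def credit(amount: BigDecimal): Unit = { balance += amount }
    }

    // The use-case algorithm lives in a stateless service instead of a role.
    class TransferService {
      def transfer(from: Debitable, to: Creditable, amount: BigDecimal): Unit = {
        from.debit(amount)
        to.credit(amount)
      }
    }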
Re: do not make things complex
by James Coplien,
Complexity is in the mind. Current object orientation creates accidental complexity: methods must align with class boundaries, and class partitioning is arbitrary with respect to our mental models of algorithms. That's why, in a true OO program, each method is so small; and while things are simple locally, it violates Occam's razor in the large. DCI, by aligning the form of aggregate behavior with roles instead of classes, provides a more direct mapping onto the end user cognitive model. Furthermore, it slices the design into "shear layers" that tend to evolve at the same rate. Classic OO combined the rapidly changing object behaviors with slowly changing data, all in the same thing called a class. The result was complex. DCI addresses that.
Read the article; this facet is also explained there.
Re: do not make things complex
by Sadek Drobi,
@chao
Neither the article nor I argued for completely separating data and logic. Rather, the article argues for keeping data logic tied to the data, but separating context logic and algorithms from data structures so that you don't end up with your objects filled with a lot of methods for handling different contexts.
You suggest introducing a "Service Layer" for putting business code there. First, you do not want to empty your objects of all logic, because then you will end up with either long methods and code duplication or yet another procedural layer (the service layer being the first).
Now you can argue for leaving object logic inside objects but grouping business logic into your "service layer" classes. That is clearly not OOP but rather procedures grouped into modules (classes, in your case), and the interfaces aren't roles of objects in your case but plain Java interfaces. That is not a bad thing per se; however, this approach will soon run into at least two problems IMO:
- Either writing services that do nothing more than delegate, or arbitrarily using two paradigms in your upper layers (controllers), sometimes going through a procedure and sometimes playing with objects
- Code duplication, since you cannot have nested roles
- Missing the opportunity to capture the user's model as intended in a so-called OOP program
Just my 2 cents!
Re: do not make things complex
by cai chao,
I think the service object/layer can call the methods provided by other objects (it may call them through interfaces; according to the context, an object may play different roles; after all, almost all modern OO languages support interfaces).
By the way, according to DDD, the service is also a domain object in its particular domain.
In my opinion:
1. Reusability:
- Services/methods can be reused in different services; small, simple logic is more reusable.
- One object can implement multiple interfaces, so the services/methods can be reused in different roles.
OOP doesn't say that you can only extend by inheritance; you can also use composition (including the services provided by others).
2. Flexibility and maintainability:
- A layered structure and interface-driven design are flexible, maintainable and testable.
I have read the article; IMO that way will lead to more classes and will not be maintainable. I also think it will take much time to analyze and design. Sometimes you can just use the decorator pattern to realize what the article describes; it is easier and more agile.
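The decorator-based alternative mentioned above might look roughly like this (a minimal sketch with illustrative names only):

    trait Account {
      def decreaseBalance(amount: BigDecimal): Unit
      def increaseBalance(amount: BigDecimal): Unit
    }

    class BasicAccount(private var balance: BigDecimal) extends Account {
      def decreaseBalance(amount: BigDecimal): Unit = { balance -= amount }
      def increaseBalance(amount: BigDecimal): Unit = { balance += amount }
    }

    // A decorator adding the "transfer source" behavior for one use case.
    class TransferSourceDecorator(underlying: Account) extends Account {
      def decreaseBalance(amount: BigDecimal): Unit = underlying.decreaseBalance(amount)
      def increaseBalance(amount: BigDecimal): Unit = underlying.increaseBalance(amount)
      def transferTo(destination: Account, amount: BigDecimal): Unit = {
        underlying.decreaseBalance(amount)
        destination.increaseBalance(amount)
      }
    }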