
Unshackle Your Domain

Duration: 59:30

Summary

This presentation, from QCon SF 08, analyzes real-world projects where the decision to use explicit state transition models was made, and the many interesting modeling/architectural possibilities that arose from that decision. Along the way, the IMIS system and its performance are linked to explicit state transition modeling.

Bio

Greg Young is co-founder and CTO of IMIS, a stock market analytics firm. He has 10+ years of varied experience in computer science, from embedded operating systems to business systems. You can often find Greg on experts-exchange.com, where he runs the .NET section of the site. He is a frequent contributor to InfoQ.

About the conference

QCon is a conference organized by the community, for the community. The result is a high-quality conference experience where a tremendous amount of attention and investment has gone into having the best content on the most important topics, presented by the leaders in our community. QCon is designed with the technical depth and enterprise focus of interest to technical team leads, architects, and project managers.

Recorded at:

Jun 24, 2009


Community comments

  • Confirms my beliefs too.. but some Queries

    by Dinkar Gupta,

    Nice session, Greg! I completely agree with the points you mentioned. I just thought the example ended a bit abruptly; it would have been nice if we had proceeded a little further from the talk on present vs. post (the anti-corruption part and more). Was that intentional or due to time constraints? I would like to see how the events were then processed within the domain and then how the applied updates reached the other bounded context.



    I have one question which I often face, regarding the usage of DTOs and domain entities & aggregates. It is argued a number of times that this conversion of DTOs to domain objects and back is an unnecessary overhead and also results in duplication of code and effort. My facade receives a DTO, which is then mapped to a domain object (maybe by a domain object builder/mapper) and supplied to the domain class for processing. The reverse happens when a domain object is returned by the domain service I invoked from the facade: I use a DTOAssembler to assemble the DTO from the supplied domain object.
    Should one instead expose the domain objects directly? Maybe due to poorly designed models, most of the time at least 70% of the DTOs are exact copies of the domain entities. That's where this model gets questioned.

    What's your view on that?
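
    A minimal sketch of the round-trip described above (not Greg's approach, just an illustration of the mapping overhead being questioned), using hypothetical Order/OrderDto names, in Python for brevity:

        from dataclasses import dataclass

        @dataclass
        class OrderDto:            # flat contract shared with the client
            order_id: str
            total: float

        @dataclass
        class Order:               # domain entity
            order_id: str
            total: float

            def apply_discount(self, pct: float) -> None:
                self.total -= self.total * pct

        def to_domain(dto: OrderDto) -> Order:     # builder/mapper
            return Order(order_id=dto.order_id, total=dto.total)

        def to_dto(order: Order) -> OrderDto:      # DTO assembler
            return OrderDto(order_id=order.order_id, total=order.total)

        # Facade: map in, call the domain, map back out.
        def apply_order_discount(dto: OrderDto) -> OrderDto:
            order = to_domain(dto)
            order.apply_discount(0.10)
            return to_dto(order)

    When the two shapes are this close, the mapping code is pure duplication, which is exactly the overhead the comment questions.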

  • Re: Confirms my beliefs too.. but some Queries

    by Vaughn Vernon,

    Sorry to barge in, but Greg doesn't use DTOs in the way you describe. As you can see from the presentation, he selects DTOs directly out of the reporting database (maybe through N/Hibernate). As he stated, changes to the domain model are only done via commands. You could view the message payloads of the commands as DTOs, but they are different. And as you saw, a single incoming change transaction could consist of multiple messages, all with different concerns for altering the state of the domain model.
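
    Roughly, the split described here looks like the sketch below (illustrative table and command names; the stack discussed is .NET, Python is used here only for brevity). Reads pull DTOs straight from the reporting store; changes enter only as command messages.

        import sqlite3
        from dataclasses import dataclass

        # Query side: DTOs come straight off the reporting database,
        # no domain objects involved (hypothetical order_summary table).
        def find_order_summaries(conn: sqlite3.Connection) -> list[dict]:
            rows = conn.execute(
                "SELECT order_id, customer_name, total FROM order_summary"
            ).fetchall()
            cols = ("order_id", "customer_name", "total")
            return [dict(zip(cols, row)) for row in rows]

        # Write side: the only way to change the domain is a command message.
        @dataclass(frozen=True)
        class DeactivateInventoryItem:
            item_id: str
            reason: str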

  • Re: Confirms my beliefs too.. but some Queries

    by Greg Young,

    Dinkar,

    I would suggest that you are losing an important piece of business information by sending DTOs up and down and just mapping them to domain objects. If you are finding that your models are not very different, then maybe this is a place where you could be doing something simpler? Active Record or various CRUD scaffolding systems come to mind.

    Cheers,

    Greg

  • Re: Confirms my beliefs too.. but some Queries

    by Dinkar Gupta,

    Thanks for the clarification. I was referring to what Greg mentioned as the typical scenario in the presentation. I have seen at least a dozen such typical implementations where the DTOs were representing the structure of the data as the client wants to see it, while the domain producing it has a different structure altogether, with the data source obviously aligned to the domain model rather than to the DTO structures. In such cases (and in places where two different data sources are not possible to implement) this problem does arise pretty often.




    The problem was that the DTOs were always part of the data exchange contract between the client and the facades, and carrying those over to the data access layer seemed like coupling data access to the client requests. Instead, the facades mapped the DTOs to what the domain model accepted, and the domain model objects were used by the DAL. But I do see the value in segregating the facades and the underlying model in line with command-query separation. I guess the combination of active records, value objects, and scaffolding (on the non-transactional side) with a domain model (on the transactional side) now makes more sense. That is really some good advice and I am going to follow it.

    Thanks for sharing.

  • The Speaker is hot

    by Gustavo Hernandez Minetti,

    ;-))

  • Historic Modeling

    by Michael Perry,

    This is validation of an idea that I use frequently. I call it Historic Modeling. Historical facts are changes to a system. Facts have predecessor/successor relationships. Model a system as a partial order of related facts.



    I've defined a set of rules and a framework for Historic Modeling.
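
    A minimal sketch of that idea, with hypothetical Fact and kind names: facts are immutable, each fact knows its predecessors, and state is derived by replaying facts in an order consistent with the partial order.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Fact:
            """An immutable historical fact; predecessors define a partial order."""
            kind: str
            subject: str
            predecessors: tuple = ()

        def replay_order(facts):
            """Order facts so every predecessor comes before its successors."""
            seen, ordered = set(), []
            def visit(fact):
                if id(fact) in seen:
                    return
                seen.add(id(fact))
                for predecessor in fact.predecessors:
                    visit(predecessor)
                ordered.append(fact)
            for fact in facts:
                visit(fact)
            return ordered

        # An order is created, then shipped; shipping succeeds creation.
        created = Fact("OrderCreated", "order-1")
        shipped = Fact("OrderShipped", "order-1", predecessors=(created,))
        assert replay_order([shipped]) == [created, shipped]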

  • Anti-Corruption Layer between Command and Query DBs

    by Michael Paterson,

    Hi Greg,

    My team and I are in the process of designing a message-oriented architecture much like the one you described. In fact, we took quite a few ideas from you! You mentioned an anti-corruption layer between your Command and Query handlers / databases. Where does that live? I was considering a separate handler that would be triggered after the command handler did its business, as we may not have full control over our databases...

    Thoughts?

    Thanks,
    Mike

  • Re: Anti-Corruption Layer between Command and Query DBs

    by Greg Young,

    Mike,

    Sorry for the delay; I did not have email updates on. Someone happened by here and sent me an email letting me know you had a question (thanks, anonymous person).

    Here is the general flow for me ...

    UI sources a command ... pushes to domain.
    Domain receives the command, maps to a command handler (aka application service, though in some cases you can map directly to aggregates)
    Command handler maps the data in the command to a call on an aggregate root.
    If the command succeeds, the aggregate root stores events representing the changes that occurred from the command.
    When the aggregate root is saved to the repository the events are published.

    That is the end of the life cycle within this bounded context (and the events go to, say, a bus). On the other end we now have a receiver (this is where the anti-corruption layer exists). As an example consider an ordering BC and a shipping BC ... the shipping BC is subscribed to events from the ordering BC; when it receives an event it takes the message, maps it to its own command handler and uses the event to create a parallel model of the ordering BC (in other words, how "it views" the data of the ordering BC).

    There are lots of reasons why this is a good thing to be doing: availability, scalability, and the ability to host servers in varying locales all come to mind (for more see Pat Helland's "Normalization is for sissies", which is a great paper on the subject).

    HTH

    Greg
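
    A compressed sketch of that flow (hypothetical InventoryItem names; the real system is .NET, Python is used here only for brevity): the command handler maps the command to a call on the aggregate root, the aggregate records events for successful changes, saving the aggregate publishes them, and the downstream bounded context builds its own parallel view from those events.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class DeactivateItem:          # command sourced by the UI
            item_id: str
            reason: str

        @dataclass(frozen=True)
        class ItemDeactivated:         # event recorded by the aggregate
            item_id: str
            reason: str

        class InventoryItem:           # aggregate root
            def __init__(self, item_id: str):
                self.item_id = item_id
                self.active = True
                self.uncommitted_events = []

            def deactivate(self, reason: str) -> None:
                if not self.active:
                    raise ValueError("already deactivated")
                self.active = False
                self.uncommitted_events.append(ItemDeactivated(self.item_id, reason))

        class Repository:
            def __init__(self, publish):
                self.publish = publish     # e.g. a bus.publish callable
                self.items = {}

            def save(self, item: InventoryItem) -> None:
                # Saving the aggregate publishes its pending events.
                self.items[item.item_id] = item
                for event in item.uncommitted_events:
                    self.publish(event)
                item.uncommitted_events.clear()

        def handle_deactivate(cmd: DeactivateItem, repo: Repository) -> None:
            """Command handler (application service): map command -> aggregate call."""
            item = repo.items.setdefault(cmd.item_id, InventoryItem(cmd.item_id))
            item.deactivate(cmd.reason)
            repo.save(item)

        # Downstream bounded context: subscribes to the events and keeps its own
        # parallel view of the data (the anti-corruption layer lives here).
        class ShippingView:
            def __init__(self):
                self.deactivated_items = set()

            def on_event(self, event) -> None:
                if isinstance(event, ItemDeactivated):
                    self.deactivated_items.add(event.item_id)

        view = ShippingView()
        repo = Repository(publish=view.on_event)
        handle_deactivate(DeactivateItem("sku-42", "damaged"), repo)
        assert "sku-42" in view.deactivated_items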

  • Re: Anti-Corruption Layer between Command and Query DBs

    by Greg Young,

    egad I should have used <br>s

  • Re: Anti-Corruption Layer between Command and Query DBs

    by Greg Young,

    LOL < BR >

  • Model changes affecting the audit trail?

    by Marc Grue,

    How can the current state be reconstructed through old commands with only today's model? Last year's model would probably have given another output given the same one-year-old command. I'm probably missing the obvious... :)

  • Re: Model changes affecting the audit trail?

    by Alex Cruise,

    I'd like to see Greg's answer to this too!

    In any case, I'd imagine that such structural changes need to go into the audit log too, with a monotonically increasing "generation count" assigned to each successive migration. Of course, for another layer of sanity checking, you'd probably want to attach to every audit record the id of the generation that was current at the time of the change.

    And declarative migrations bring up all kinds of entertaining metamodel issues!
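
    One common way to keep year-old records replayable against today's model, along the lines sketched above, is to stamp each stored record with the generation it was written under and migrate ("upcast") it forward one generation at a time before replay. A minimal sketch with hypothetical field names:

        from typing import Callable, Dict

        # Upcasters migrate a record forward one generation; the key is the
        # generation the record was written under.
        UPCASTERS: Dict[int, Callable[[dict], dict]] = {
            1: lambda e: {**e, "currency": "USD"},             # gen 1 -> 2: new field, defaulted
            2: lambda e: {**e, "amount": float(e["amount"])},  # gen 2 -> 3: field retyped
        }
        CURRENT_GENERATION = 3

        def upcast(record: dict) -> dict:
            event, generation = dict(record["event"]), record["generation"]
            while generation < CURRENT_GENERATION:
                event = UPCASTERS[generation](event)
                generation += 1
            return event

        # A record written under last year's model replays after migration.
        old = {"generation": 1, "event": {"amount": "10"}}
        assert upcast(old) == {"amount": 10.0, "currency": "USD"}

    Attaching the generation that was current at write time to each record, as suggested above, is what makes this migration chain possible to sanity-check.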
