Ulf Wiger advocates a change of programming model, based on the actor model, which more accurately reflects the concurrency patterns humans have used in daily life for thousands of years.
Ulf Wiger is the CTO of Erlang Solutions. He worked for Ericsson, where he was Chief Designer of the AXD 301. At nearly 2 million lines of Erlang code, the AXD 301 is the most complex system ever built in Erlang. In recent years, Ulf has been involved in several products based on the AXD 301 architecture, and has been an active member of the Open Source Erlang community.
QCon is a conference that is organized by the community, for the community. The result is a high-quality conference experience where a tremendous amount of attention and investment has gone into having the best content on the most important topics, presented by the leaders in our community. QCon is designed with the technical depth and enterprise focus of interest to technical team leads, architects, and project managers.
Learn concurrency - what is the key point of Erlang
Interesting talk, especially the point that C++ failed where Erlang did not (and back then Erlang was much younger...).
How do I learn concurrency and use Erlang to solve such problems? Consider a process that receives far more messages than it was designed for (during a high I/O peak). It (or some other process) might fail, and supervisors recognize such a failure, but how does Erlang or OTP help handle this situation? What about lost messages or general network failure?
And what built-in mechanism helps in comparison to other languages, e.g. C++? C++ allows you to build actors, too. Is the answer that Erlang/OTP already has such mechanisms built in? This might be a make-or-buy decision (=> jump to Erlang instead of solving the hard problems once more in C++)?
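A minimal sketch of what "built in" means here: OTP ships a supervisor behaviour, so restart-on-failure is declared rather than hand-written. The module name and worker below are made up for illustration; the worker is a bare process loop kept in the same module so the sketch is self-contained.

```erlang
-module(my_sup).
-behaviour(supervisor).
-export([start_link/0, init/1, start_worker/0]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% The supervised worker: a bare receive loop that can be told to crash.
start_worker() ->
    {ok, spawn_link(fun loop/0)}.

loop() ->
    receive
        crash -> exit(boom);   % simulate a failure
        _     -> loop()
    end.

init([]) ->
    SupFlags = #{strategy  => one_for_one,  % restart only the failed child
                 intensity => 5,            % at most 5 restarts...
                 period    => 10},          % ...within 10 seconds
    Child = #{id      => worker,
              start   => {?MODULE, start_worker, []},
              restart => permanent},        % always restart on exit
    {ok, {SupFlags, [Child]}}.
```

If the worker exits, the supervisor restarts it automatically; only when the restart intensity is exceeded does the failure escalate upward. That escalation chain is the mechanism you would otherwise have to build yourself in C++.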
Re: Learn concurrency - what is the key point of Erlang
When I was trained as a programmer 30 years ago, we had a 10-day technology-independent course in software design. Nowadays, it seems most programmers are taught only technologies. And I fear the OO revolution of the early 1990s (the shift to OOPLs and OO design thinking) reduced our collective intelligence about software design.
Do you agree with any of the 7 presumptions below?
Are there any you think OOPers should be taught before they are let loose?
1) Define valid input event sequences, draw state machines for complex ones?
2) Remove redundancy in a complex state machine by dividing it into smaller, simpler parallel state machines?
3) Write code that detects and rejects out-of-sequence events?
4) Presume every procedure, every state machine, needs its own supervisor (fork control rather than chain communication)?
5) Presume a distributed component is stateless unless otherwise required?
6) Model input and output data streams as sequential regular expressions, and build procedure structures around those?
7) In enterprise information systems, code the business rules as close to the database as possible - even in stored procedures?
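Points 1-3 translate quite directly into Erlang. A sketch (plain processes rather than OTP's gen_statem, and with a hypothetical module name): each state is a function, and any event that is not valid in the current state is detected and rejected explicitly.

```erlang
-module(conn_fsm).
-export([start/0, send_event/2]).

start() ->
    spawn(fun() -> idle() end).

send_event(Pid, Event) ->
    Pid ! {event, self(), Event},
    receive
        {reply, Pid, Reply} -> Reply
    after 5000 -> {error, timeout}
    end.

%% State: idle -- only 'connect' is a valid input here (point 1).
idle() ->
    receive
        {event, From, connect} ->
            From ! {reply, self(), ok},
            connected();
        {event, From, Other} ->
            %% Out-of-sequence event: detect and reject (point 3).
            From ! {reply, self(), {error, {unexpected, Other}}},
            idle()
    end.

%% State: connected -- only 'disconnect' is valid; it returns to idle.
connected() ->
    receive
        {event, From, disconnect} ->
            From ! {reply, self(), ok},
            idle();
        {event, From, Other} ->
            From ! {reply, self(), {error, {unexpected, Other}}},
            connected()
    end.
```

Point 2 falls out of the same style: instead of one machine with a combinatorial state space, you run several of these loops as separate processes side by side.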
Is this a fair abstract of the 3 patterns mentioned?
Client requests service, passing an invocation message with a unique reference for the service.
Client looks in its mailbox for a reply message with that unique reference.
Client repeats until the mailbox contains the server reply or N seconds pass.
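The first pattern can be sketched directly with make_ref/0 and a selective receive with timeout (this is essentially what gen_server:call does under the hood; module name and the trivial echo server are made up for illustration):

```erlang
-module(rpc_sketch).
-export([call/2, echo_server/0]).

%% Pattern 1: tag the request with a unique reference, then
%% selectively receive the matching reply or time out after N seconds.
call(Server, Request) ->
    Ref = make_ref(),                    % unique reference for this call
    Server ! {request, self(), Ref, Request},
    receive
        {reply, Ref, Result} ->          % matches only OUR reference
            {ok, Result}
    after 5000 ->                        % N = 5 seconds here
        {error, timeout}
    end.

%% A trivial echo server, for illustration only.
echo_server() ->
    receive
        {request, From, Ref, Req} ->
            From ! {reply, Ref, Req},
            echo_server()
    end.
```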
Client spawns subordinate processes, passing each an invocation message with a unique reference for the service.
Client loops until all replies received or timeout.
On timeout, the client kills its subordinates and returns an error message to its supervisor.
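The scatter-gather pattern might look like this (a sketch with a hypothetical module name; each subordinate runs one fun and reports back under a unique reference):

```erlang
-module(scatter).
-export([map_timeout/2]).

%% Pattern 2: spawn one subordinate per task, gather all replies,
%% and on timeout kill the remaining subordinates and return an error.
map_timeout(Funs, TimeoutMs) ->
    Parent = self(),
    Workers = [begin
                   Ref = make_ref(),
                   Pid = spawn(fun() -> Parent ! {done, Ref, F()} end),
                   {Ref, Pid}
               end || F <- Funs],
    gather(Workers, [], TimeoutMs).

gather([], Acc, _TimeoutMs) ->
    {ok, lists:reverse(Acc)};
gather([{Ref, _Pid} | Rest] = Pending, Acc, TimeoutMs) ->
    receive
        {done, Ref, Result} ->           % selective receive: out-of-order
            gather(Rest, [Result | Acc], TimeoutMs)  % replies wait in mailbox
    after TimeoutMs ->
        %% Timed out: kill all remaining subordinates, report the error.
        [exit(P, kill) || {_, P} <- Pending],
        {error, timeout}
    end.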
Block out-of-sequence events.
Buffer them until they fit.
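The third pattern is what Erlang's selective receive gives you for free: a message that does not match any pattern simply stays in the mailbox until the receiver is ready for it. A sketch (hypothetical module name) that consumes numbered messages strictly in order, however they arrive:

```erlang
-module(in_order).
-export([collect/2]).

%% Pattern 3: only the expected sequence number matches; messages
%% that arrive early remain buffered in the mailbox until they fit.
collect(Next, Max) when Next > Max ->
    [];
collect(Next, Max) ->
    receive
        {seq, Next, Payload} ->          % blocks until number Next arrives
            [Payload | collect(Next + 1, Max)]
    end.
```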