# Java 8 for Financial Services

*Posted by John T Davies on Nov 06, 2014. Estimated reading time: 27 minutes*

Java 8 isn’t just the latest software gadget from Oracle; it can vastly simplify your code and even make it run faster.

I’m a great fan of the latest gadgets but Java 8 brings more than just new gadgets to Java. With functional programming in the form of lambdas making its debut in Java 8, this is the biggest change to the language since generics. I have worked in financial services for well over 25 years now and the move from Java 7 to Java 8 is almost as exciting as it was moving from C++ to Java itself way back in ’95.

In this article I’m going to:

• Start with some simple CSV data from an Excel spreadsheet, import it into a Java model and run some lambdas on the Java model containing the data
• Generate additional data by randomising it; this way we can work with a few million trades instead of just the 10 lines above - better for sort demonstrations
• Finally, dispense with the simple data and use more complex XML data from FpML, randomise that and run similar lambdas

## Setup - Read in some test data…

Let’s start with a data set we can play with, something we can refer to for the rest of this article. It’s simple so that we can keep the example simple but later on I’ll show you how everything we do with the simple version we can also do with more complex data sources such as FpML, ISO-20022, SWIFT or FIX.

First the data…

| ID | TradeDate | BuySell | Currency1 | Amount1 | Exchange Rate | Currency2 | Amount2 | Settlement Date |
|----|-----------|---------|-----------|---------|---------------|-----------|---------|-----------------|
| 1 | 21/07/2014 | Buy | EUR | 50,000,000 | 1.344 | USD | 67,200,000 | 28/07/2014 |
| 2 | 21/07/2014 | Sell | USD | 35,000,000 | 0.7441 | EUR | 26,043,500 | 20/08/2014 |
| 3 | 22/07/2014 | Buy | GBP | 7,000,000 | 172.99 | JPY | 1,210,930,000 | 05/08/2014 |
| 4 | 23/07/2014 | Sell | AUD | 13,500,000 | 0.9408 | USD | 12,700,800 | 22/08/2014 |
| 5 | 24/07/2014 | Buy | EUR | 11,000,000 | 1.2148 | CHF | 13,362,800 | 31/07/2014 |
| 6 | 24/07/2014 | Buy | CHF | 6,000,000 | 0.6513 | GBP | 3,907,800 | 31/07/2014 |
| 7 | 25/07/2014 | Sell | JPY | 150,000,000 | 0.6513 | EUR | 97,695,000 | 08/08/2014 |
| 8 | 25/07/2014 | Sell | CAD | 17,500,000 | 0.9025 | USD | 15,793,750 | 01/08/2014 |
| 9 | 28/07/2014 | Buy | GBP | 7,000,000 | 1.8366 | CAD | 12,856,200 | 27/08/2014 |
| 10 | 28/07/2014 | Buy | EUR | 13,500,000 | 0.7911 | GBP | 10,679,850 | 11/08/2014 |

It’s purely fictitious data, made up in just a few minutes in Excel, but we’re going to use it for the examples. To import this into Java we have a few choices: we could hand-code the Trade class and hard-wire in the data, which is OK for a demo but not great for anything else, or we could write a quick parser to read the CSV, but now we’re talking about quite a bit of code before we can start playing. My preferred way, and what I recommend (even if I were not the CTO), is to use C24’s Integration Objects (a free download) and simply Java-bind the CSV or XLS to generate the Java in seconds; all of these demos use C24’s CSV binding.
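If you don’t have the C24 binding to hand, the same shape can be hand-rolled in plain Java. The sketch below is purely illustrative and is *not* the generated C24 model; the `Trade` class, its getter names and the `parseLine()` helper are my own hypothetical stand-ins, trimmed to a few of the columns:

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

// A minimal, hand-rolled stand-in for the generated binding (sketch only).
public class CsvTradeDemo {

    // Hypothetical cut-down Trade class: just the columns used in this sketch.
    public static class Trade {
        private final int id;
        private final String buySell;
        private final String currency1;
        private final BigDecimal amount1;

        public Trade(int id, String buySell, String currency1, BigDecimal amount1) {
            this.id = id;
            this.buySell = buySell;
            this.currency1 = currency1;
            this.amount1 = amount1;
        }

        public int getID() { return id; }
        public String getBuySell() { return buySell; }
        public String getCurrency1() { return currency1; }
        public BigDecimal getAmount1() { return amount1; }
    }

    // Parse one CSV line of the form: ID,TradeDate,BuySell,Ccy1,Amount1,Rate,Ccy2,Amount2,SettlementDate
    public static Trade parseLine(String line) {
        String[] f = line.split(",");
        return new Trade(Integer.parseInt(f[0]), f[2], f[3], new BigDecimal(f[4]));
    }

    public static void main(String[] args) {
        List<Trade> trades = Arrays.asList(
            parseLine("1,21/07/2014,Buy,EUR,50000000,1.344,USD,67200000,28/07/2014"),
            parseLine("2,21/07/2014,Sell,USD,35000000,0.7441,EUR,26043500,20/08/2014"));

        trades.forEach(t -> System.out.println(t.getID() + " " + t.getBuySell() + " " + t.getCurrency1()));
    }
}
```

The generated binding does all of this (and the validation) for you; the point of the sketch is only that everything below works against any POJO with getters.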

## Creating and using a Java 8 Stream

Streams are a new way to work with data: as the name would suggest, as streams. Again, like lambdas, there is nothing you can do with Streams that you can’t do without them. They do however make your code a lot simpler and, as long as you understand them, easier to read and maintain. Lastly, these new constructs give a lot more information to the compiler and JVM, so further improvements can be made at runtime, giving you better performance. There are a few gotchas, like exception handling, debugging and infinite streams, but we’ll cover those as we go along.

Take a list of currencies that we perhaps want to filter or print out. I’m sure you can think of a dozen ways of doing it with arrays, Lists, Collections, iterators, for-loops and so on. Here’s the Stream version, or should I say “a” Stream version…

```java
Stream<String> currencies = Stream.of("GBP", "EUR", "USD", "CAD", "AUD", "JPY", "HKD");
currencies.forEach( ccy -> System.out.println( ccy ) );
```

If you’ve immediately spotted the forEach and said there’s a simpler way of doing that, you’re right; I wanted to start with something that’s a little easier to understand. We create a variable “ccy”, which we can call anything (typically we just use a single letter), then define what applies to that variable.

These are all equally valid…

```java
currencies.forEach( currency -> System.out.println( currency ) );
currencies.forEach( c -> System.out.println( c ) );
currencies.forEach( x -> System.out.println( x ) );
```

I’m a great fan of self-documenting code, so on those grounds I should really prefer the first version with “currency”, but to be honest I’m starting to get used to the single-letter version. My advice would be to just choose a letter that makes sense; “c” (for currency) makes more sense to me here than “x”. What’s interesting and worth pointing out is that in a Stream the variable is automatically typed to the items in the Stream, in this case the Stream’s element type (in the <>), so a String.

We can actually go one stage further with this lambda and use a method reference; the “::” syntax refers to the method (which can be a constructor, a static method or an instance method) to be applied to each of the elements.

```java
currencies.forEach( System.out::println );
```
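All three kinds of method reference mentioned above can be seen in a few self-contained lines; this is just an illustrative sketch using standard JDK classes, nothing from the trade model:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MethodRefDemo {

    // Instance method reference: equivalent to c -> c.toUpperCase()
    public static List<String> toUpper(List<String> ccys) {
        return ccys.stream().map(String::toUpperCase).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(toUpper(Arrays.asList("gbp", "eur", "usd"))); // [GBP, EUR, USD]

        // Static method reference: equivalent to s -> Integer.parseInt(s)
        Function<String, Integer> parse = Integer::parseInt;
        System.out.println(parse.apply("42")); // 42

        // Constructor reference: equivalent to s -> new StringBuilder(s)
        Function<String, StringBuilder> builder = StringBuilder::new;
        System.out.println(builder.apply("FX").length()); // 2
    }
}
```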

Now let’s try a filter…

```java
Stream<String> currencies = Stream.of("GBP", "EUR", "USD", "CAD", "AUD", "JPY", "HKD");
currencies
    .filter( c -> c.matches("GBP|EUR") )
    .forEach( System.out::println );
```


The output of this is just GBP and EUR. We could try this too…

```java
currencies
    .filter( c -> c.contains("A") )
    .forEach( System.out::println );
```


And we get CAD and AUD.

Let’s quickly move on to our trade data and start really using the streams and lambdas…

Reading our CSV test data in code is just a single line; it’s read in and populated by the bound code that was generated by the C24 process mentioned above. We now have a Trades object that has an optional header and an array of Trade objects.

```java
Trades tradeData = C24.parse(Trades.class).from(new File("tradedata.csv"));

// getTrade() here is the generated accessor for the Trade[] array
List<Trade> tradeList = Arrays.asList(tradeData.getTrade());
Stream<Trade> tradeStream = tradeList.stream();
tradeStream.forEach(System.out::print);
```

What I’ve done here is create a List from the array[] using the static Arrays.asList() method and as a result we get a List of Trade objects, these are the lines of Trade data (minus the header of course). Finally we get the Stream and apply the forEach as we did above to print them all out.

```
1,21/07/2014,Buy,EUR,50000000,1.344,USD,67200000,28/07/2014
2,21/07/2014,Sell,USD,35000000,0.744,EUR,26043500,20/08/2014
```


Let’s apply a filter…

(Note: I am using "t ->" in my lambdas but I could equally well use "trade ->" or even "banana ->". There is no standard and no common practices, just choose what makes most sense, it's your code.)

```java
tradeStream
    .filter( t -> t.getID() == 9 )
    .forEach(System.out::print);
```


and we get just the row starting with number 9.

```
9,28/07/2014,Buy,GBP,7000000,1.837,CAD,12856200,27/08/2014
```

Let’s try the currencies…

```java
tradeStream
    .filter( t -> t.getCurrency1().matches("GBP|EUR") )
    .forEach(System.out::print);
```

```
1,21/07/2014,Buy,EUR,50000000,1.344,USD,67200000,28/07/2014
10,28/07/2014,Buy,EUR,13500000,0.791,GBP,10679850,11/08/2014
```

OK now we just want to count the number of “Buy” trades…

```java
long count = tradeStream
    // getBuySell() is the generated accessor for the Buy/Sell column
    .filter( t -> t.getBuySell().equals("Buy") )
    .count();

System.out.printf("count = %d%n", count);
```

And we get “count = 6”, now let’s get the sum of all the GBP trades…

```java
BigDecimal total = tradeStream
    .filter( t -> t.getCurrency1().matches("GBP") )
    .map( t -> t.getAmount1() )
    .reduce( BigDecimal.ZERO, BigDecimal::add );

System.out.printf("total = %,d%n", total.intValue());
```

Firstly a filter to keep just the GBP trades (yes, we’ve ignored the other side of the trade for simplicity), then the map basically turns the stream from a stream of Trade objects into a stream of BigDecimal objects, the output of getAmount1(). Finally reduce() initialises itself with the first value, BigDecimal.ZERO, and then performs the BigDecimal::add method on each member of the stream. And the result is, hopefully, as expected…

```
total = 14,000,000
```
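If you want to play with the map/reduce pattern in isolation, here’s a self-contained sketch using plain BigDecimals rather than the bound Trade objects (the class and method names are mine, for illustration only):

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

public class ReduceDemo {

    // Sum a list of amounts: BigDecimal.ZERO is the identity, add() the accumulator.
    public static BigDecimal sum(List<BigDecimal> amounts) {
        return amounts.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    public static void main(String[] args) {
        // The two GBP amounts from the sample data (trades 3 and 9).
        List<BigDecimal> gbpAmounts = Arrays.asList(
                BigDecimal.valueOf(7_000_000), BigDecimal.valueOf(7_000_000));
        System.out.println(sum(gbpAmounts)); // prints 14000000
    }
}
```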

We could have done this a number of different ways. The following gets the double value from getAmount1() and then uses a slightly different stream type (a DoubleStream), which adds a sum() method. The warning I would give here is that we’re casting to double, which is not good for financial operations. The result for this small demo is luckily the same, but with larger volumes we could see errors accumulating from the rounding and precision of double.

```java
double total = tradeStream
    .filter( t -> t.getCurrency1().matches("GBP") )
    .mapToDouble( t -> t.getAmount1().doubleValue() )
    .sum();

// note %,.0f here: %,d would throw an exception for a double
System.out.printf("total = %,.0f%n", total);
```


Our trades are sorted but let’s shuffle them up a bit so that we can demo the sort…

```java
Trades tradeData = C24.parse(Trades.class).from(new File(fileName));

List<Trade> tradeList = Arrays.asList(tradeData.getTrade());
Collections.shuffle(tradeList);   // the standard (old) Java shuffle

tradeList.stream()
    .sorted(Comparator.comparing(Trade::getID))   // the new sort, on the attribute we choose
    .forEach(System.out::print);
```


The two key lines are first the standard (old) Java shuffle and then the new sorted(), passing in the attribute or column I want to sort on.

Let’s look at a few other features before we get into higher volumes and more complex messages. We have predicates, where we can see if any or all of the trades match the predicate. Let’s check that all the amount calculations are correct…

```java
boolean match = tradeStream
    .allMatch(t -> t.getAmount1().multiply(BigDecimal.valueOf(t.getExchangeRate()))
        .compareTo(t.getAmount2()) == 0);

System.out.println("allMatch = " + match);
```


Stream.allMatch() simply runs the predicate against all of the items in the stream and returns a boolean, true if they all match. What we’re doing here is checking that the first amount multiplied by the exchange rate is equal to the second amount. We use BigDecimal here because that’s just what you do in financial services; we have to have precise control over every penny or cent, and IEEE doubles can give us a few errors after a while.

We could also put the predicate into a method to re-use it another time…

```java
private static Predicate<Trade> rateCheck() {
    return t -> t.getAmount1().multiply(BigDecimal.valueOf(t.getExchangeRate()))
        .compareTo(t.getAmount2()) == 0;
}
```


and then just call it…

```java
boolean match = tradeStream.allMatch(rateCheck());
System.out.println("allMatch = " + match);
```

Note that I use static here purely because I’m writing the code in main() for this article, no other reason.

The predicate could also be part of the Trade object, and we can also reuse a validation method on the Trade object; the predicate would then simply be that the result of the isValidate() method is true. C24 provides very powerful validation built into the models, particularly useful for FpML, FIX, ISO-20022, SWIFT and other standards requiring complex semantic validation in addition to syntactic validation.
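To see the reusable-predicate idea end to end without the C24 model, here’s a self-contained sketch; `MiniTrade` is a hypothetical cut-down class of my own, not the generated one:

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class PredicateDemo {

    // Hypothetical cut-down trade: just the fields the predicate needs.
    public static class MiniTrade {
        public final BigDecimal amount1, amount2;
        public final double rate;
        public MiniTrade(long amount1, double rate, long amount2) {
            this.amount1 = BigDecimal.valueOf(amount1);
            this.rate = rate;
            this.amount2 = BigDecimal.valueOf(amount2);
        }
    }

    // The same cross-rate check as in the article, reusable across allMatch/anyMatch/filter.
    public static Predicate<MiniTrade> rateCheck() {
        return t -> t.amount1.multiply(BigDecimal.valueOf(t.rate))
                .compareTo(t.amount2) == 0;
    }

    public static boolean allConsistent(List<MiniTrade> trades) {
        return trades.stream().allMatch(rateCheck());
    }

    public static void main(String[] args) {
        // Amounts and rates from trades 1 and 9 of the sample data.
        List<MiniTrade> trades = Arrays.asList(
                new MiniTrade(50_000_000, 1.344, 67_200_000),
                new MiniTrade(7_000_000, 1.8366, 12_856_200));
        System.out.println("allMatch = " + allConsistent(trades)); // allMatch = true
    }
}
```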

We have noneMatch()…

```java
boolean match = tradeStream
    .noneMatch( t -> t.getTradeDate().getTime() > t.getSettlementDate().getTime() );
System.out.println("noneMatch = " + match);
```


This just checks that none of the trades have a trade date greater than their settlement date, not using the new Java Date classes but the good ol’ java.util.Date. Finally there’s anyMatch() but hopefully by now you’re getting the picture.

Let’s drop the example file of just 10 trades and use another useful method called generate() to create a larger stream. The reason we’re doing this is to demonstrate the performance enhancements we can get from using parallel operations.

First we need something that creates new Trade objects; I’ve basically randomised the content of the trade in a new method createTrade(). To fit it on one page I’ve taken the comments out but I think you’ll find it largely understandable as pure code. It goes through each of the fields in the trade and creates a new random value. For the currencies we need to make sure the second isn’t the same as the first, so we loop until it’s different, and we do the same with the tradeDate, making sure it’s not a weekend. Finally I’ve used fixed values for the exchange rate but randomised them slightly using a Gaussian distribution with a standard deviation of 0.5% and then limited them to 5 significant figures.
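The full createTrade() isn’t reproduced here, but a sketch of the three trickier parts just described (a distinct currency pair, weekday-only dates, Gaussian-jittered rates rounded to 5 significant figures) might look like the following; the class and helper names are mine, not the article’s:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Calendar;
import java.util.Random;

public class TradeFactory {

    private static final String[] CCYS = {"GBP", "EUR", "USD", "CAD", "AUD", "JPY", "CHF"};
    private static final Random RANDOM = new Random();

    // Jitter a mid rate with a Gaussian (std dev 0.5%) and keep 5 significant figures.
    public static BigDecimal jitterRate(double midRate) {
        double jittered = midRate * (1 + RANDOM.nextGaussian() * 0.005);
        return new BigDecimal(jittered).round(new MathContext(5));
    }

    // Pick a second currency different from the first: loop until it's different.
    public static String secondCurrency(String ccy1) {
        String ccy2;
        do {
            ccy2 = CCYS[RANDOM.nextInt(CCYS.length)];
        } while (ccy2.equals(ccy1));
        return ccy2;
    }

    // Roll a date forward until it lands on a weekday.
    public static Calendar weekday(Calendar date) {
        while (date.get(Calendar.DAY_OF_WEEK) == Calendar.SATURDAY
                || date.get(Calendar.DAY_OF_WEEK) == Calendar.SUNDAY) {
            date.add(Calendar.DAY_OF_MONTH, 1);
        }
        return date;
    }
}
```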

Running this we get something like the following; obviously each time we run it the output is different, it’s random of course…

```
1,25/08/2014,Buy,CAD,3000000,0.82775,CHF,2483250,15/09/2014
5,19/08/2014,Sell,AUD,7000000,0.68886,EUR,4822020,02/09/2014
```

Creating a stream from this is very simple, we can use Stream.generate()

```java
// Our random Trade creator has the signature "Trade createTrade()"
Stream<Trade> tradeStream = Stream.generate(() -> {
    return createTrade();
});
```


Using this stream of randomly generated trades we can do everything we did above on the small sample. Note however that the results that I print here will not necessarily be the same as yours. There is one catch though, if you were to run this…

```java
tradeStream.forEach(System.out::print);
```

you would have a lot of output and it simply wouldn’t end. Similarly if we were to calculate the sum or count the number of items we’d never return a result so we need to limit the stream’s output; limit(n) does the job nicely.

```java
tradeStream
    .limit(100)
    .forEach(System.out::print);
```

Now that we can generate a larger number, let’s get a list of 1,000 trades of just Buy/Sell GBP to USD. I am using a Collector this time to collect all the results into a List using toList(). One reason, apart from demonstrating it here, is that we can use the result more than once as we print out the results. The downside is that each Trade is now stored in memory and we’ve lost one of the advantages of streams.

```java
List<Trade> gbp2usdTradeList = tradeStream
    .filter(t -> t.getCurrency1().matches("GBP") && t.getCurrency2().matches("USD"))
    .limit(1_000)
    .collect(Collectors.toList());
```

And to print out the first three and the last three…

```java
gbp2usdTradeList.stream().limit(3).forEach(System.out::print);
System.out.println("...");
gbp2usdTradeList.stream().skip(997).forEach(System.out::print);
```

We get, or at least I get (as yours will have different numbers)…

```
20,28/08/2014,Sell,GBP,34000000,1.68473,USD,57280820,11/09/2014
29,11/08/2014,Sell,GBP,18000000,1.69772,USD,30558960,18/08/2014
...
40672,07/08/2014,Buy,GBP,40000000,1.68191,USD,67276400,14/08/2014
```

Let’s test the parallel sorting now, the data is already sorted by tradeId and the amounts are not terribly unique so let’s sort on the exchange rate, first serially (not in parallel)…

```java
long start = System.nanoTime();

tradeStream
    .filter(t -> t.getCurrency1().matches("GBP") && t.getCurrency2().matches("USD"))
    .limit(1_000_000)
    .sorted(Comparator.comparing(Trade::getExchangeRate))
    .limit(3)
    .forEach(System.out::print);

System.out.printf("time = %.3f%n", (System.nanoTime() - start) / 1e9);
```


And I get…

```
37387422,29/08/2014,Sell,GBP,18000000,1.64245,USD,29564100,05/09/2014
24092486,11/08/2014,Sell,GBP,18000000,1.64346,USD,29582280,18/08/2014
time = 91.153
```

```java
long start = System.nanoTime();

tradeStream
    .filter(t -> t.getCurrency1().matches("GBP") && t.getCurrency2().matches("USD"))
    .limit(1_000_000)
    .parallel()
    .sorted(Comparator.comparing(Trade::getExchangeRate))
    .limit(3)
    .forEach(System.out::print);

System.out.printf("time = %.3f%n", (System.nanoTime() - start) / 1e9);
```

I get…

```
23330640,25/08/2014,Buy,GBP,16000000,1.64217,USD,26274720,15/09/2014
7073144,22/08/2014,Sell,GBP,33000000,1.64487,USD,54280710,12/09/2014
time = 29.270
```

Again, remember that each time I run this the trades are regenerated, so the results will not be the same. At the volumes we’re working with though, 1 million trades selected from roughly 42 million generated, the timings certainly average out.

My machine is a 4-core (hyper-threaded) MacBook Pro, so this three-fold performance increase is about what I’d expect, and impressive for adding just one method call. What’s happening behind the scenes is that the new fork/join framework is being used. It’s worth pointing out that I wouldn’t see this sort of gain if I hadn’t first filtered the data, simply because the bottleneck would have been the stream generation.
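Timings will of course differ from machine to machine, but one property you can always check is that a well-behaved (stateless, associative) pipeline returns the same answer serially and in parallel. A sketch with longs standing in for trades, again using the fork/join pool via parallel():

```java
import java.util.stream.LongStream;

public class ParallelDemo {

    public static long serialSum(long n) {
        return LongStream.rangeClosed(1, n).sum();
    }

    public static long parallelSum(long n) {
        // parallel() hands the work to the common fork/join pool behind the scenes
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        long n = 10_000_000L;
        // Addition is associative, so splitting the work can't change the answer.
        System.out.println(serialSum(n) == parallelSum(n)); // true
    }
}
```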

Let’s take a look at some more complex stream and lambda operations…

Let’s count the number of each currency pair using a grouping operation, similar to what you’d do in SQL…

```sql
select CCY1,CCY2,count(*) from Trades group by CCY1,CCY2
```

Now in Java using Streams and lambdas…

```java
Map<String, Long> map = tradeStream
    .limit(1_000_000)
    .collect(Collectors.groupingBy(t -> t.getCurrency1() + "/" + t.getCurrency2(),
        Collectors.counting()));

System.out.println("map = " + map);
```


And we get (or at least I got)…

```
map = {AUD/JPY=23816, USD/JPY=23706, AUD/GBP=23949, USD/GBP=23745, CHF/GBP=23666,
JPY/CHF=23864, EUR/CAD=23934, CHF/JPY=23844, CHF/AUD=24077, EUR/USD=23934, USD/AUD=23982,
GBP/EUR=23564, EUR/AUD=23568, USD/EUR=23606, GBP/CAD=23735, GBP/USD=23676, JPY/GBP=23551,
JPY/AUD=23759, CHF/EUR=23780, EUR/GBP=23975, GBP/AUD=23831, GBP/JPY=23606, CAD/AUD=23752,
GBP/CHF=23927}
```

The groupingBy() creates a map, in this case Map<String, Long>, the String comes from the groupingBy() and the Long from the Collectors.counting().
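The groupingBy()/counting() pattern is easy to try without any trade model at all; a runnable sketch with plain currency-pair strings (names are mine, for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingDemo {

    // Count occurrences of each pair: the classifier supplies the key,
    // counting() supplies the Long value.
    public static Map<String, Long> countByPair(List<String> pairs) {
        return pairs.stream()
                .collect(Collectors.groupingBy(p -> p, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> pairs = Arrays.asList("GBP/USD", "EUR/USD", "GBP/USD", "EUR/CHF");
        System.out.println(countByPair(pairs));
    }
}
```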

A little further now, we’ll groupBy currency (just Currency1) and then groupBy Buy/Sell and finally aggregate the amounts (Amount1).


```java
Map<String, Map<Object, BigDecimal>> map = tradeStream
    .limit(1_000_000)
    .collect(
        Collectors.groupingBy(t -> t.getCurrency1(),
            Collectors.groupingBy(t -> t.getBuySell(),
                Collectors.reducing(BigDecimal.ZERO, Trade::getAmount1, BigDecimal::add))));

System.out.println("map = " + map);
```

And the output…

```
map = {AUD={Sell=1826959000000, Buy=1822442000000}, CHF={Sell=1818975000000,
Buy=1826496000000}, USD={Sell=1817626000000, Buy=1820617000000}}
```

If you were wondering how you might debug this, here’s a tip: use peek(). Remember though that you can’t put conditionals into a method reference, so to get something like if( t.something() < 5 ) print(t) the best plan is to put the condition in a method, like so…


```java
Map<String, Map<Object, BigDecimal>> map = tradeStream
    .limit(1_000_000)
    .peek( t -> occasionallyDebug(t) )
    .collect( Collectors.groupingBy(t -> t.getCurrency1(),
        Collectors.groupingBy(t -> t.getBuySell(),
            Collectors.reducing(BigDecimal.ZERO, Trade::getAmount1, BigDecimal::add))));
```

And the method/function…


```java
private static void occasionallyDebug( Trade trade ) {
    if( trade.getID() % 100_000 == 0 ) {
        System.out.print(trade);
    }
}
```


In real life we could use this calculation for position keeping. We could do it by counter-party, by currency and of course by date.

## High volumes and complex XML messages

So far we’ve played with a simple trade model. It’s usually the easiest way to understand, but we’re now going to step things up and move to real trades, defined in FpML. Well, when I say “real” I mean real-looking trades; naturally we’re going to have to randomise them again.

This is the example I’m going to use, it’s several pages so I won’t waste space here printing it…

In the TradeHeader I’m going to change the two TradeId values from TW9235 and SW2000 to “Party1-1234” and “Party2-1234” where the “1234” is the index of the generated message and I’m going to add a random date (again a weekday) from 2013 into the TradeDate.

Then I’m going to randomise the InitialValue with a value from 0 to 10 million (with 2 decimal places). This occurs in two places (two of the SwapStreams) so both will be changed. That is all I will randomise though, as changing any other values would make it pointless for what we’re going to look at.

FpML is pretty complex; messages can have 13 levels of hierarchy, which is why I didn’t start with it. Using Java is far easier than a relational database for this sort of thing. We can use a hierarchical XML binding to work with the XML. We could also do this with XQuery and XPath but they are both very XML centric languages and not why we’re here. Inside FpML are several substitution groups, these are a little like references to an interface where the implementation is defined at run-time so we have to also navigate these as well as sometimes cast interfaces to concrete classes in order to use the right getters. We have a lot more about this on our web site so we’ll skip any more detail at this point.

Reading the FpML template (the one in the link) is very simple, we use the same API as with the CSV file…

```java
File XML_INPUT_FILE = new File("valid-ird-ex01-vanilla-swap.xml");

Fpmlmain54DocumentRoot message = C24.parse(Fpmlmain54DocumentRoot.class).from(XML_INPUT_FILE);

Trade trade = message.getDataDocument().getDataDocumentSG1().getTrade()[0];
```


Setting the two initial values…

```java
Swap swap = (Swap) trade.getProduct();

BigDecimal value = BigDecimal.valueOf(Math.random() * 10_000_000).setScale(2,
    BigDecimal.ROUND_UP);

swap.getSwapStream()[0].getCalculationPeriodAmount().getCalculation().getCalculationSG1()
    .getNotionalSchedule().getNotionalStepSchedule().setInitialValue(value);

swap.getSwapStream()[1].getCalculationPeriodAmount().getCalculation().getCalculationSG1()
    .getNotionalSchedule().getNotionalStepSchedule().setInitialValue(value);
```


Naturally we could write a little method to hide some of this complexity, which is exactly what I did for the lambdas we’re going to use in a few paragraphs.

```java
private static void setInitialValue( Fpmlmain54DocumentRoot message, int index, BigDecimal value ) {
    Swap swap = (Swap) message.getDataDocument().getDataDocumentSG1().getTrade()[0].getProduct();

    swap.getSwapStream()[index].getCalculationPeriodAmount().getCalculation().getCalculationSG1()
        .getNotionalSchedule().getNotionalStepSchedule().setInitialValue(value);
}
```


I should also point out that this helper method can actually be added to the FpML model in C24’s studio, meaning that we can add a “virtual” InitialValue to the root of the message with getters and setters, similar to below. This vastly simplifies the code, both for traditional Java and our new lambdas. We can now do the following…

```java
message.setInitialValue( 0, value );
BigDecimal value = message.getInitialValue( 0 );
```

So we’ve got the message with randomised data; we just need a few thousand of them now. To do that we duplicate the message and add the copies to a List.

```java
private static final int ARRAY_SIZE = 10_000;
private static List<Fpmlmain54DocumentRoot> messageList = new ArrayList<>(ARRAY_SIZE);

for (int i = 0; i < ARRAY_SIZE; i++) {
    Fpmlmain54DocumentRoot message = C24.parse(Fpmlmain54DocumentRoot.class).from(XML_INPUT_FILE);
    // randomise the trade IDs, the trade date and the two initial values, as described above
    messageList.add(message);
}
```

Let’s start working with the messageList.

We’re going to loop through the messages and count the number of trades with a value over 9.9 million, remembering that they’re a lot more complex now.

```java
long start = System.nanoTime();

long count = messageList.stream()
    .map(t -> t.getInitialValue(0))
    .filter(v -> v.compareTo(BigDecimal.valueOf(9_900_000)) > 0)
    .count();

System.out.println("count = " + count);

double seconds = (System.nanoTime() - start) / 1e9;
System.out.printf("Time to process: %,d messages: %,.3f seconds (%,.0f per second)%n%n",
    ARRAY_SIZE, seconds, ARRAY_SIZE / seconds);
```


I get the following with 10,000 FpML messages. I should point out that I didn’t do any JIT warmup so it’s just indicative.

```
count = 123
Time to run: 10,000 messages: 0.028 seconds (362,371 per second)
```


We’ll come back to the performance in a second. Let’s now try to sum all the trades from the month of July…

```java
BigDecimal result = messageList.stream()
    .filter(t -> t.getTradeDate().getMonth() == Calendar.JULY)  // getTradeDate() is another "virtual" getter
    .map(t -> t.getInitialValue(0))
    .reduce(BigDecimal.ZERO, BigDecimal::add);

System.out.println("result = " + result);
```

Again the performance is similar. What I’d like to do now is demonstrate the parallel performance. All we need to do is use a parallelStream()…

```java
double seconds = 0;
BigDecimal result = BigDecimal.ZERO;

for( int loop = 0; loop < 10; loop++ ) {
    long start = System.nanoTime();

    result = messageList.parallelStream()
        .filter(t -> t.getTradeDate().getMonth() == Calendar.JULY)
        .map(t -> t.getInitialValue(0))
        .reduce(BigDecimal.ZERO, BigDecimal::add);

    seconds = (System.nanoTime() - start) / 1e9;
}
System.out.println("result = " + result);
System.out.printf("Time to process (parallel): %,d messages: %,.3f seconds (%,.0f per second)%n%n",
    ARRAY_SIZE, seconds, ARRAY_SIZE / seconds);
```

What I’ve done here to get a better timing result is to loop the test 10 times and just take the timing of the last iteration. Remember that if you’re using -server in your JVM settings, the default is 10,000 iterations before the code is compiled by the JIT.

I ran this serially and parallel with 100,000 messages…

```
result = 44656591648.06
Time to process (serial): 100,000 messages: 0.028 seconds (3,561,634 per second)
result = 44656591648.06
Time to process (parallel): 100,000 messages: 0.007 seconds (13,713,659 per second)
```

## Reducing memory usage

As you can see we get quite impressive performance from the parallel stream. If you tried running this you may have noticed that you’d need quite a bit of RAM and some large -Xms/-Xmx settings. The reason for this is that the messageList requires all of the messages to be in memory. Binding FpML to Java results in message objects that are a good 15-25KB in size; create 100,000 of these and we need a good 2 GB of RAM.

We believe we’ve solved this problem with a new Java binding technology that binds complex models directly to binary. With the code above but using this new binding (no change to the code, just the libraries) we can get over 40 times more messages into RAM. I was able to run the test above with up to 20 million FpML trades in memory on my laptop.

## Analysing memory and GC performance

I said I’d briefly touch on GC, memory and performance measurement. I’ve used a number of tools, but what I tend to use now is jVisualVM and Oracle’s new Java Mission Control (“jmc” for short, and on the command line) for code profiling, and jClarity’s Censum for GC profiling.


These are the only tools I found that give accurate results especially with Java 8 and the G1 GC.

JMC is basically like jVisualVM on steroids. Oracle are obviously looking to make money from this and it looks like it may well become a useful tool. The licensing is a little strange though and I’m not even sure I should be putting screen shots in here. If they complain I’ll simply not promote it.

You can use it without a license (with restrictions) but you do need to add a few -XX and -D parameters on startup. It is, as you can see, quite sexy. I have to point out the gotcha with these sorts of tools though: they seriously affect your runtime performance. For this reason I prefer to use them for code profiling and not performance baselining. Of course one leads to the other, so you get there in the end. You can however get a very good idea of what’s going on in your code. The level of detail is perfect for seeing where large amounts of memory are being allocated, or which parts of your code are spending too long polling or waiting on IO.


jClarity’s Censum is a GC log file analyser: add a few parameters to the JVM startup and you can do a very neat post-mortem analysis of the log files. This is far less intrusive than run-time analytics and by far the best and most accurate mechanism I’ve found for looking at memory and GC behaviour. On the right you can see the post-GC heap usage slowly climbing as we create the 2 million messages, and finally a satisfyingly flat plateau at about 1.08GB. This was using a 4GB heap; obviously the total usage doesn’t change with the heap size but the performance does, especially the parallel performance.


On the left you can see the GC pause time during 2 million tests. The first part “dancing around” up to about the 170 seconds mark is the data generation before the test. The pause times hit a maximum of 70ms during the data generation but remain in single milliseconds for the queries / searches, which is good. Censum, combined with expertise from the jClarity guys is the perfect way to test out your code for performance.

Finally, the allocation rate: creating the trades was more complex, so the allocation rate there is relatively low. Running the tests increased it to about 800MB/sec, and then it went significantly higher during the parallel test. Normally I wouldn’t want to see such a high allocation rate, but in this case we’re using an API that returns a BigDecimal, which in Java terms is a monster. If the API returned a double or float then we’d see virtually zero allocation, with the exception of anything created by the stream or lambda.

We’ve just touched on the streams API and lambdas. My goal was to give you a quick introduction and to show you how it might be applicable to financial services. I’ve personally found hundreds of resources and examples on the Internet, although getting used to the syntax can be a little daunting at first. I found the best method was to start with something simple like this and try out what you want first.


We started with a few lines of fictitious trades, we moved up to a million or so and touched on parallel streams. Finally we vastly increased the complexity to FpML trades (Interest Rate Derivatives) and mentioned some clever memory compaction (using a binary codec) to extend the ability for a single JVM to parallel search well over 10 million FpML trades in just the memory of a laptop.

Java 8 is the new tool for programming, but the skill in producing what’s required remains in the hands of the programmer. With Java 8, combined with frameworks like Akka and Spring and performance enhancements like binary binding, the skilled programmer can now create a masterpiece.

To learn more about C24 Technologies, C24 Integration Objects and C24’s SDOs including data-sheets, code reference implementations, and more technical information, please visit this website.

John Davies is CTO and co-founder of C24, John’s background as well as C24’s is deep financial services, Java binding to message standards like SWIFT, FIX, FpML and ISO-20022. In the past John was Chief Architect at JP Morgan and BNP Paribas, Technical Director of IONA & Progress Software and founding architect of Visa’s V.me, now Visa Checkout. John has co-authored several enterprise Java and architecture books and is a frequent speaker at banking and technology conferences.
