Adam Bien, an independent consultant and pioneer of zero dependencies in the enterprise world of Java, highlights the benefits of consistently using standards, regardless of whether they involve Java or existing patterns. He argues that by doing so, he managed to future-proof the systems he built, preparing them for the cloud era and even for the AI-Native era.
Key Takeaways
- Sticking to Vanilla Java, such as the standard library, Jakarta EE, and MicroProfile, with zero or very few dependencies can outlast trend‑driven stacks, simplify upgrades, and even make security and certifications easier.
- Quarkus managed to close the gap between Java standards and the cloud, achieving fast boot times, an improved developer experience, and even lower cloud bills while keeping external dependencies low.
- The simple vertical slicing imposed by the Boundary‑Control‑Entity (BCE) pattern, combined with publicly available Java specs, enables LLMs to generate production‑ready Java code.
- When working with LLMs for code generation, moving away from a monolithic configuration file to a set of lean, task‑specific skills (e.g. microservices, CLIs, tests) improves the reliability of the output on large codebases.
- OpenTelemetry, Java‑based GPU tooling like TornadoVM, and zero‑dependency Java 25 scripts open the door to observable, AI‑enabled Java systems that can run both in the big clouds and in sovereign on‑prem environments.
Transcript
Olimpiu Pop: Hello, everybody. I'm Olimpiu Pop, InfoQ editor, and I have in front of me Adam Bien, one of the veterans of Java who always wanted to have fun and challenge the status quo, maybe when we were discussing enterprise Java more than a decade ago. I'll not add more to that, but I want to see what Adam's impression is about the way things have changed in the meantime and how AI can or cannot work with Java. But without further ado, Adam, please introduce yourself.
Adam Bien: You said veteran. The truth is, in my first projects, I got the criticism that it is impossible to be that young and an architect. So if someone calls me a senior or a veteran, this is not true; I'm still a junior. Age is just a number, and I'm 30. I've been a freelancer since 1997. My first conference as a speaker was maybe in '99 or 2000. And I did a lot back then, of course, with e-commerce and web. Do you know James, the Java Apache Mail Enterprise Server? You remember something like this? I ran James on my server for mailing, because I only knew Java back then, so I used my own mail server, and it was James. The alternatives were Postfix and something else, and I couldn't understand how they operate, so I used Java. And because of that, people became aware of what I was doing, and I was asked to give my first talk at a conference.
Since then, I've been speaking at conferences, but I'm not considering myself as a speaker. I actually have no strategy. If I'm invited to a conference, what you will hear me speaking about is what I do in projects, so I don't have to prepare a lot. This is about me. And I spend all the time on Java, and I'm coding more than ever with Java right now.
Zero-Dependency Java: Why Boring Standards Outlive Trendy Frameworks [02:23]
Olimpiu Pop: Thank you. I will just add a bit of controversy about how I see you, and a couple of things I like about you. As you mentioned, you're very flexible, very adaptable, and you take on challenges. I remember your first presentation more than a decade ago, when you came and talked in Cluj, Transylvania. And obviously, whenever we discussed Transylvania, we were talking about Dracula. At that point, I think there were more than 100 people in the room, maybe even around 200. And everybody told you that Dracula is so Hollywood, it will not catch on with anybody in Transylvania.
Adam Bien: Someone stood up and said, "We hate Dracula". And I was not prepared for that, actually.
Olimpiu Pop: And you're very flexible, and you just adapted and transformed everything into a distillery and something that goes with moonshine. People in Transylvania are very proud of that, because it's this hard, vodka-like liquor that everybody prepares at home. But on the other side, when everybody was going with the stream, with Spring and everything, you were one of the partisans going against it. So for me, it's a very nice balance: as you said, you're very flexible on one hand, adapting quite quickly, but where you see there is no need, you stand your ground; a hardliner, maybe, if you allow me to say so. Looking at that, what you were preaching even at that point was to use as few dependencies as possible when building something, especially in Java. And at that point we were discussing Java enterprise applications.
So J2EE, now Jakarta EE, is now mainstream, but there is a catch to it: now you are one of the partisans of Quarkus. Why? Was Spring not good, and why do you like Quarkus?
Porting Old Java EE to Quarkus Was Painless [04:05]
Adam Bien: The truth is, at the very beginning, I became a freelancer by accident. I really liked Java for other reasons. Back then, there were too many application servers which were not compatible with each other, and I saw why it was possible for so many consultants to be out there, switching between application servers; for me, it was hard enough to understand one. And then J2EE happened. For me, it was a huge simplification: this spec is orders of magnitude simpler than all the application servers combined, because it was an abstraction. So I liked J2EE and Jakarta and so forth. Most of my clients back then ran application servers, and then they started installing Spring on those servers. And this is what I didn't understand.
So okay, why are you doing this? Running Spring on Tomcat, I get it: you buy support from Interface21, later renamed to SpringSource. But running Spring inside WebSphere is crazy; I don't get it. Then you have two forms of dependency injection; nothing can justify such an architecture. And the second thing I never got about Spring in the early days: in many projects, there was more XML than Java, and I didn't like XML at all. There were deployment descriptors in J2EE as well, and I didn't like them either, so I tried to generate the deployment descriptors. But it seemed to me like the Spring community was proud of the fact that they could define everything in XML. My observation was that back then, they defined it once, and it never changed. I'm not sure why they were doing this.
I was then influenced by Ruby on Rails and convention over configuration. And since Java EE 5, which was the early days, we have had this in Java, and I was really happy. It worked for me, and if I showed it to someone, everyone liked it. My workshops kept getting bigger, and there were more and more fangirls and fanboys of this approach. But if you talked publicly about J2EE, everyone associated it with WebSphere, which boots in two hours, and which I never used. For me, it felt like a secret weapon. I've been involved in many very successful projects, and I think the main reason for the success is this convention over configuration and fast startup times. It was almost like Quarkus right now, 20 years ago, and now Quarkus is similar. The fact is, Quarkus is the fastest runtime right now. Most of my projects are serverless. Java is 35 times faster than Python, Ruby, Node.js, or whatever, and Quarkus starts the fastest.
My clients are saving money with Quarkus. It's as simple as that. We could use something different, but the question is, why? What also amazes me is that, if the entire discussion about being business-first and agile in adapting to requirements were true, everyone using Java should run on Micronaut or Quarkus. And if it's fashion that decides instead, then I don't care; okay, pick whatever you like. But I even tried to replace Quarkus with plain Java, actually: if Java 25 is enough, why do we need Quarkus? It's not that I'm a Quarkus fanboy, but I think right now Quarkus is the most logical choice.
Olimpiu Pop: I covered Quarkus for quite some time, and I spoke with Max and some of the team. What I really liked, particularly in the discussion I had with Max, was that he wasn't afraid to be inspired by other ecosystems like Python and JavaScript, and to take what was actually useful from them. Leaving aside all the politics and other aspects of those ecosystems, one of the best things was the fast feedback loop, and in my opinion, that's one of Quarkus's strong points: the ability to have tests run while you're typing, with very, very fast feedback, and also the versatility of being able to run natively when needed, but also on the JVM. That's quite useful.
Boundary-Control-Entity: A 1990s Pattern Retrofitted for the LLM Era [08:19]
So I don't understand the movement towards this. Well, that is technology, but there is another aspect. In your last presentation in Cluj, that was in September last year, again with the Java User Group, you coded, and you coded a lot. You usually code fast, but this time you had some help from the AI almighty, or whatever the proper name is. And you brought to light a pattern, or I think it's a pattern, right? BCE, Boundary-Control-Entity. That's another classic, or old, pattern. Why is it enough?
Adam Bien: Yes, we should do another session in Cluj. You are in the Java User Group, so you just have to know how to ping me, and I will speak again.
Olimpiu Pop: I know the discussion has started already, Adam. It's something that I've been promising myself, and promising you, for more than a decade now. We should not make it a battle of the frameworks; I know that you like to have fun while coding and you like challenges, so I'll try to put that together.
Adam Bien: Yes. Okay, cool. So BCE, Boundary-Control-Entity, is a similar story: it dates from 1992, Ivar Jacobson, several books. But this has a different background. I was always a singleton, a freelancer, and I was hired by larger companies in sometimes unusual ways, because it is not always possible to hire a singleton. And I had to justify my choices against huge consulting companies. What I observed is that everyone has a point of view, there are lots of discussions, and in some companies it is still the case that they use external software suppliers or service providers to implement their stack. What happens then is that the service providers do not implement according to a standard; everyone does something different. And back then, it was even worse, because some of them used code generators, others used whatever. So for my clients it meant that if Software Supplier A creates software, it is not supported by Software Supplier B, and they were stuck with different software. This was the first problem.
The next problem was endless discussions about naming. I wrote my first enterprise book, in German, in, I guess, 2003 or 2005, something like that, and I used my own names, which I liked a lot. I remember there was a behaviour, there was a facet, and there was a domain, something like this. The problem was everyone said, "The idea is great, but why do you call it a facet and not an endpoint? Why do you call it a domain and not data?" And I started discussing the names in endless emails and said, "I would never do this again". So what I learned 20 years ago is that if there is a standard, it is way easier to pick the standard, even if it's suboptimal. So: Boundary-Control-Entity.
A Clean BCE Structure Helps LLMs Generate Production-Ready Java Code [11:18]
The boundary is fine. I don't like control; I like entity. So I would like to replace control, but it's too late; the standard is set in stone. What I did then in projects is ask: if we start something, what is the absolute common base? The common base is Java. If we pick Java, there's no discussion about Scala, Groovy, Clojure, whatever; we just stick with Java. It may not be enough, and we can always pick something different later, but this is the absolute minimal base: everybody knows Java. Then, reducing dependencies: every framework back then was an additional discussion. I don't know whether you remember the discussions back then about which logging framework to pick for a project. I was sick of all the pointless discussion, and I said, "Okay, we don't take anything. We just use whatever Java provides us, and stop. And if we need something, then we discuss". And what happened then? Basically, the IBM shops mostly had WebSphere.
Thankfully, I was not allowed to be involved in such projects. My Red Hat clients had JBoss, and I usually introduced GlassFish back then. I was a huge GlassFish fanboy, for one reason: it was the reference implementation. So if I found a bug, it was fixed quickly; there was a great relationship with the team. Plus, my clients could buy support for GlassFish if they had to, without switching products, since the supported edition was the same bits, which was not true for JBoss. This is why I switched lots of JBoss projects to GlassFish back then. And with GlassFish, it was the same discussion: if we already have GlassFish and are paying money for GlassFish, why do we need external frameworks? Just stick to what we have. And what happened then? Zero dependencies. Most of my 20-year-old projects have no dependencies. Developers were happy; we never had to upgrade or migrate.
And the cool story is that recently we migrated some of the old projects to Quarkus, and there was almost nothing to do, so my clients were more than happy. Another observation: there were different departments which always experimented with modern frameworks, reactive or whatever; everything they tried died. I cannot remember a single instance where they experimented with something cooler than our boring choice, and it survived.
Olimpiu Pop: So the learning is that if you stay with the boring choice, you're building something for the future. Otherwise, you have something like a sandbox project that changes quickly, and then you'll have a lot of headaches maintaining it, and the operational cost that comes with it. Even though there is probably an asterisk to your statement about the 20-year-old projects moving to Quarkus just the same: probably you had to change the namespaces.
Adam Bien: But my clients were always surprised by how easy it was. In one project we had two people and, I think, three days, and it was more or less done. It was mechanical work; it's not like you needed a big plan. And this one was a really old project, with JNDI and so on, so we had to replace JNDI lookups with injection. It was more effort because it was really old, but it still worked. And why did it work? Because the patterns back then were consistent, the patterns now are consistent, and there were no external dependencies. It was a more or less mechanical conversion, without AI even, in this particular project. We just did it manually.
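The mechanical conversion Adam describes typically looks something like the following sketch. The class name and JNDI path are hypothetical, and the fragment assumes the standard JNDI and Jakarta CDI APIs; it is shown for illustration, not as code from his project:

```
// Before: J2EE-era JNDI lookup, resolved imperatively at runtime
Context ctx = new InitialContext();
OrderService orders = (OrderService) ctx.lookup("java:comp/env/ejb/OrderService");

// After: declarative CDI injection, as in Jakarta EE or Quarkus
@Inject
OrderService orders;
```

Because the old code followed one consistent pattern, each occurrence can be converted the same way, which is what makes such a migration largely mechanical.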
And maybe, if I may: what I did in Cluj was also introduce BCE, and at upcoming conferences I will talk about it. What I found by accident is that AI really loves BCE. LLMs know BCE by heart because it's that old, and they are really efficient at building BCE-structured projects, which is good for me, because my old projects live even longer.
Olimpiu Pop: But more than that, about BCE, I think there's something specific to Java, a secret sauce, let's call it, that makes it easier for LLMs to generate proper Java code. If I remember correctly from our previous discussions, it's the way Java adds new features to the language itself: it's a very democratic, open process, you have a specification that's very well documented, and then you have multiple implementations, reference implementations or not. How do you think the way Java is built helps with making code easy to generate?
Adam Bien: There are some studies, from Tencent among others, even for plain Java: if you just pick Java without anything, just plain Java SE, just the language. As I remember, the efficiency of LLMs at generating clean Java code is around 80%, compared to, I think, 60% for Python and 60% for Node.js. So even if you ask an LLM to generate a Java CLI application without a framework, that is already better. And I apply a trick. This was an accident, but it is how I always worked with Java EE: I ignored GlassFish, and I ignored Quarkus. I rarely read the tutorials from GlassFish or Quarkus. What I did instead was read the spec; not really read, rather scan the spec, or search actively in it. I usually had a folder back then with all the Java EE specs downloaded and a full-text search, and if I needed to know something, I found it there.
And if the server behaved differently, I opened the book. So this was my approach back then: okay, this is the spec, no discussion. Do you want to be J2EE certified or not? This was my hard line: if you want to be J2EE, Java EE, or Jakarta EE certified, then read the spec. And then I thought, because all these specs have been publicly available forever, LLMs have to know them. And I was stunned by how well they know them. So what I did, one year ago, was start using that in my projects: I grounded the LLMs against the spec, and so far, zero hallucinations. I cannot remember a single case where they invented their own non-existing annotation or whatever. They know JAX-RS and so forth by heart. And the cool story is why it works so well: because it was backwards compatible.
Because these specs were backwards compatible, whatever version the LLM picked, it was always the same story. This works so well that in most projects I have a skill or rules file, but it is maybe 100 lines, 100 rules, and most of them are rules like: don't generate too many unit tests, don't write stupid Javadoc, never generate setters. These are the rules, not instructions on exactly what to do. And you get code which looks like my code. It is amazing how well this works, and we're doing it in all my projects right now. What I like is that the LLM generates the same code as I would have generated back then, with different namespaces, of course, but the same ideas and principles.
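As an illustration, a lean rules file in the spirit Adam describes might contain entries like these. This excerpt is hypothetical, reconstructed from the rules he mentions in the conversation, and is not copied from his projects:

```
# project rules: style constraints, not step-by-step instructions
- Ground all Jakarta EE / MicroProfile code in the public specifications;
  never invent annotations.
- Structure every component as BCE: boundary (JAX-RS), control, entity (JPA).
- Don't generate too many unit tests; test behaviour, not accessors.
- Don't write boilerplate Javadoc.
- Don't generate setters unless a spec requires them.
```

The point of keeping the file this short is that the specs themselves carry the detailed knowledge; the rules only constrain style.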
From One Giant CLAUDE.md to Multiple Focused Skills [18:08]
Olimpiu Pop: So our discussion was last September, Autumn 2025. At that point, the trend was to use a CLAUDE.md file, or an AGENTS.md file, or whatever. But then we spoke a couple of months later, and you said you had started cleaning up a bit. How did your setup change? Did you start using skills? Because that's the new thing we are discussing now, context engineering, and my feeling is that as agents get more and more powerful, you have to make your setup even leaner. What's your impression?
Adam Bien: In Cluj, back then, I presented Kiro before it was released; I think it was released that week. Kiro is like a Visual Studio Code fork with spec-driven development, as I remember. And I showed Claude Code, and this is why I didn't want to release my setup: I had one CLAUDE.md file with something like 500 rules, covering everything I did, from CSS to Java to AWS serverless. I said back then that I would like to clean it up, which actually happened. I released airails.dev at the beginning of this year; this is the website, and these are the skills. So basically, what I had in Cluj is reflected right now in individual skills. And interestingly, this is the AI implementation of the BCE design ideas.
But to answer your question precisely: I have lots of skills. I have, for instance, one skill for Java CLI applications, one skill for unit testing, and one skill for MicroProfile server applications; that last one would actually be compatible with the Cluj project. In one of my sessions, I took the old Cluj project, the one with the distillery, which I still had, used the skill to migrate it to Quarkus, and it worked. The idea is to be a little bit faster, but create perfect code. And the reason is that the most-asked question in my sessions and workshops is how well LLMs work in huge projects. This is the pain point: okay, in my greenfield hello world it works, but what about my project? My finding is that the better the structure and the cleaner the code, the better it works in huge projects.
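A set of lean, task-specific skills like the ones Adam lists could be laid out roughly like this. The sketch is hypothetical, assuming Claude Code's convention of one `SKILL.md` file per skill directory; the skill names are illustrative, not taken from airails.dev:

```
.claude/skills/
├── java-cli/
│   └── SKILL.md             # rules for zero-dependency Java CLI applications
├── unit-testing/
│   └── SKILL.md             # what to test, naming, no mock overuse
└── microprofile-service/
    └── SKILL.md             # BCE structure, JAX-RS/JPA conventions
```

Because the agent loads only the skill relevant to the task at hand, each skill stays small, which is the context-engineering win over one monolithic 500-rule file.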
Olimpiu Pop: As you said, probably a lot of Java code out there is big, bloated, and full of bad ideas. So how would you approach this kind of project, especially during this period of, let's call it, the token wars, when everybody has to keep their eyes on the fuel, and the fuel is the token? Because obviously, the context windows are getting larger and larger, but that means chattier API calls to the LLMs.
Reducing LLM Inference Costs by 80% with BCE and Standard APIs [21:04]
Adam Bien: In yesterday's workshop, we found something interesting with BCE, Java standards, MicroProfile, and no dependencies. This is maybe exaggerated, but we did a little research with Claude Opus, and it found that we can save 88% of inference costs with this architecture. I think that's too optimistic, but I still remember 88; it was yesterday. I would say even half would be good, or 30%. So how come? Check out one of my BCE projects and fire up Claude Code; let's say the distillery from Cluj, I had for sure a distillery BCE project. Okay, we should maybe explain what BCE is. BCE is, I think, the simplest possible architecture. The basic idea is that in the root folder, only Java packages with component names are allowed, and every package has to have a business name; we call it a component. So, for a Dracula distillery, we had maybe ghosts and blood or whatever; those would be the components.
And every component always has the same structure: if there is an external API, a boundary; some logic, control; and data, an entity. That's all. You can explain Boundary-Control-Entity to a developer in five minutes; this is why developers actually like it. So now, take the distillery, and we ask the LLM to add another liquor. What I remember is Horincă, Pălincă, and Țuică, right? These are the Romanian liquors. And because I had to write those a lot, let's say we add a fourth one; I'll call it free beer, just to stay up to date. Then you will see that Claude Code picks a tool, searches at the top level for the right package, and stays within that package. And because, Claude Code or not, it doesn't matter, LLMs like Claude Opus, Devstral from Mistral, or OpenAI's ChatGPT were trained on BCE and on MicroProfile servers and so forth. They know the boundary has to be JAX-RS, they know the entity has to be JPA, and they know what JPA is; they know what JDBC is.
So with a three-word prompt, I can extend an existing component, which wouldn't be possible without standards, because if you don't have such standards, you have to put everything in the context. With standard Java and BCE, the context can be almost empty, because the LLMs know what I mean.
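To make the structure concrete, a BCE-sliced project might look like the following sketch. This is a hypothetical layout in the spirit of the distillery example; the package and class names are illustrative, not from Adam's repositories:

```
com.example.distillery            // root: only business-named component packages
├── ghosts/
│   ├── boundary/GhostsResource.java   // JAX-RS endpoint (external API)
│   ├── control/GhostSightings.java    // business logic
│   └── entity/Ghost.java              // JPA entity
└── taxes/                             // internal component: no boundary
    ├── control/TaxCalculator.java
    └── entity/TaxRate.java
```

An agent asked to "add free beer" can locate the right place by scanning only the top-level component names, which is where the context savings come from.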
Olimpiu Pop: Okay. Just to make sure I got it correctly: with the BCE pattern, you look at a package from the business point of view, and you have everything you need in there. Even though you might have some redundancy, some code duplication, we don't fight it the way we used to. You split by business, and each component keeps everything contained, something like a module, to use the hip name currently: accounting is one component, HR another, and so on, and each has the same structure. That allows you to copy the same structure easily, and when needed, you don't have to obsess over DRY principles and make your code worse just to avoid copying functionality.
Adam Bien: Okay.
Olimpiu Pop: When should we think about extracting functionality that is used in multiple places? Because I know there is a huge debate about it: there were the people pushing DRY around, and then the other camp that says you should extract only when something is used more than a couple of times. What's the golden number for you?
Adam Bien: Without BCE, just as a rule of thumb: one copy is okay, and before I copy a third time, I will try to abstract, or whatever. I will copy first and see how it goes; at the start of a project, there's a high chance you will delete everything anyway. Now, common misconceptions. If you need audits or logging to meet your SLOs or whatever, then I would consider logging a valid name for a component, so I would create a logging component with all the logging stuff, because it's actually required by the business. And with shared components, it's not a problem that multiple components access the logic of a single component. Back then, I always picked the example of Amazon orders, where the tax calculation, the tax calculator, is something internal.
For that, there would be a taxes component with control and entity, but no boundary, because there is no reason to expose country-specific tax calculation to Amazon customers, bookstore customers. And if you have something like, let's say, a string utility, a big one, with isEmpty, isNotEmpty and so forth, I have nothing against it. Some projects are very text-heavy, so in that particular case I would create a BCE component called text, or text manipulation, or whatever; just be specific about what is inside. What is actually forbidden in BCE are names like util, common, or foundation, because they are too generic, and what I saw in projects is that you will find everything in there. The best example is java.util, right? The java.util package has everything from dates to functions to collections. I think maintainability means one package, one business meaning, and you open it without surprises; but the name has to be as specific as possible.
Rethinking Metrics, Logs, and Dashboards for the AI-Native Era [26:56]
Olimpiu Pop: Thank you. And because you mentioned it, or I think you mentioned it: observability. How do you feel about it? It's definitely useful, but that field has been a rollercoaster ride. What's the simplest approach from your point of view?
Adam Bien: My observation is that OpenTelemetry wins right now. It is one of the most popular CNCF projects, even more popular than Kubernetes itself; the CNCF project OTel. And especially in the age of agents, which have to be monitored, OpenTelemetry will become even more important. Recently I added OpenTelemetry to a Quarkus project because of demand from my clients, and it is the right choice; it is just standard. What also shifts a bit: back then, we had metrics because storage was expensive. But if you have good logs and AI, the question is, if you write nice logs, maybe you don't need many metrics. The question is whether you can use LLMs or not, because if something goes wrong, you can ask an LLM to analyse slices of your log, and you get good answers. This is my take. And the SLOs, service level objectives, have to be defined more or less by the business.
But regarding observability, what I would at least like to have is a simple counter which counts successful and failed use cases: how many books were ordered and how many orders failed, or even better, how many messages were received and how many poison messages occurred. If I have this, it is already good, because I can compute an average or whatever, pass it to an LLM and say, "Hey, watch this, and if something happens, anomaly detection, just notify me". So I think the counters are a must, and nice dashboards are nice to have, I would say.
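A minimal sketch of such use-case counters in plain, zero-dependency Java; the class and method names here are illustrative, not from any project mentioned in the conversation:

```java
import java.util.concurrent.atomic.LongAdder;

// Counts successful and failed use cases, thread-safe and dependency-free.
public class UseCaseCounter {
    private final LongAdder succeeded = new LongAdder();
    private final LongAdder failed = new LongAdder();

    public void success() { succeeded.increment(); }
    public void failure() { failed.increment(); }

    // A simple ratio that could be fed to an alerting rule or an LLM
    // for anomaly detection, as described above.
    public double failureRate() {
        long total = succeeded.sum() + failed.sum();
        return total == 0 ? 0.0 : (double) failed.sum() / total;
    }

    public static void main(String[] args) {
        UseCaseCounter orders = new UseCaseCounter();
        orders.success();
        orders.success();
        orders.failure();
        // prints failed rate: 0.3333333333333333
        System.out.println("failed rate: " + orders.failureRate());
    }
}
```

In a MicroProfile or Quarkus service, the same idea would usually be expressed with the platform's metrics API instead, but the plain-Java version shows how little is actually needed.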
Olimpiu Pop: Well, who doesn't like a nice picture, right? Is there anything else, Adam, that I should have asked you?
Adam Bien: Yes: how I developed the skills and BCE. What I did was always ask an LLM, was this a good idea? Of course it's a little bit dangerous to ask such a question, but the next question is, why? And so far, LLMs were very happy with BCE. They were also very happy with no dependencies, zero dependencies. And what I stood for back then, zero dependencies, was not a very popular opinion. I was developing everything from scratch, just using what's in Java first and so forth. And now I'm the fashionable guy. So I'm fully in the mainstream now, maybe for the first time not swimming against the current, right? Because Quarkus, serverless, native, GraalVM is cool, and I would say zero dependencies are very cool. And in one project we are actually thinking about an ISO security certification, and with my approach, we are basically done.
So, actually, there is almost nothing to do because we have no dependencies, just standards. We only have to map the requirement to the standard, and my client is happy. So they were also pleasantly surprised by how well it worked out.
Olimpiu Pop: So you're a bit confused now: when most people are listening to you, you don't have to make that many arguments.
Adam Bien: Or they say to me, "Why did we hire you? This is normal". So okay, cool. It was not like this in earlier years, but yes.
Improve the Sovereignty of Your Applications by Designing Them Cloud-Agnostic [30:17]
Olimpiu Pop: Okay. Last question on my side, and then we can wrap. You mentioned most of the trends; there is another one, and that's the sovereignty part. How do you feel about it? I know you mentioned AWS a couple of times, and there are debates about it: even though Amazon and all of the big three have boots on the ground, data centres in Europe, they are still US companies, and that means data can move across the ocean. But that's a topic for boardrooms, not for techies. How do you look at it? I know that Germany is quite sensitive about privacy, and that probably rubbed off on you. How do you ensure that you provide your customers with the needed privacy from a cloud perspective?
Adam Bien: Yes. First, about the cloud. Before the Corona pandemic, I was very happy with OpenShift and private clouds, and in some companies I was more or less forced by management to go to the cloud. Management justified it by saying, "Hey, we are more agile and scalable"; this was the argument. And I said, "Okay, if we go to the cloud, we do it the right way, not just moving whatever we have on-premises to the cloud and hoping it will be better, because from my perspective, it will only be more expensive, not better". So we refactored everything to serverless, for instance. So now we are in the cloud, and now I would be happy to just move everything back, because with Quarkus it's not a problem; it runs. Actually, our AWS Lambda serverless functions look like the on-premises Docker containers from back then, so it would not be a big deal at all to move back.
More problematic are the network infrastructure and security permissions; this is your cloud dependency, not necessarily Java. And another dependency, which is even stronger, is the LLM. You should not forget it, because right now we get lots of mandates on how to use LLMs, but Opus runs in the US; it could run in Frankfurt, but it's still AWS, or Azure AI Foundry, or Google's GCP Vertex AI, I think that's the name. Still, LLMs are almost a solved problem, because I am able to run powerful models locally. Talking about today, I could develop locally; I don't, because all my clients use remote models, but I think next year we will be at the point where powerful open-source models are good enough. And the cool story is that even the open-source models, maybe even more so, know Java really well.
So I don't need the most powerful model, because I don't care about esoteric frameworks or whatever. A model with a knowledge cut-off two years ago is good enough, and I'm happy. I know German companies which are great hosting providers; I wouldn't have a problem hosting there. But the fact is that with AWS, for instance, we host whatever we have in Frankfurt, so there is not a lot of difference. But if management says it is still somehow a US company, and I'm not a manager, but if they say move it to our own cloud, I would be happy to do so, because our applications run everywhere. We would just spend more time on infrastructure code. What I'm a little bit concerned about is that if we move back to a private data centre, its infrastructure code is not as good as Azure's or AWS's. But these are cool projects: okay, management, if you like, give us a few months, and we will replicate whatever AWS or Azure has, and we will try to build infrastructure code on your cloud, right?
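Locally hosted open source models are typically exposed over an OpenAI-compatible HTTP API, so a zero-dependency Java client in the spirit described above needs nothing beyond `java.net.http` from the standard library. The following is a minimal sketch; the endpoint URL and model name are hypothetical placeholders, not anything mentioned in the conversation:

```java
// LocalChat.java: zero-dependency call to a locally hosted,
// OpenAI-compatible LLM endpoint, using only the Java standard library.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocalChat {

    // Builds a minimal OpenAI-style chat-completions request body.
    // Kept as a pure function so it can be tested without a running server.
    static String buildRequestBody(String model, String prompt) {
        return """
               {"model":"%s","messages":[{"role":"user","content":"%s"}]}"""
               .formatted(model, prompt.replace("\"", "\\\""));
    }

    public static void main(String[] args) throws Exception {
        var body = buildRequestBody("local-model", "Explain Jakarta EE in one sentence.");
        System.out.println(body);

        // Pass the endpoint URL as an argument to actually send the request,
        // e.g. java LocalChat.java http://localhost:8080/v1/chat/completions
        if (args.length == 0) return;

        var request = HttpRequest.newBuilder()
                .uri(URI.create(args[0]))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        var response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

Because the file has no external dependencies, it can be launched directly with `java LocalChat.java`, which is exactly what makes moving between cloud-hosted and on-premises model endpoints a configuration change rather than a rewrite.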
Run Things Locally Where Possible [33:36]
Olimpiu Pop: Well, just to be a bit of a contrarian, as I learned from you: currently, what I'm doing is trying to find tools, good old-fashioned tools that I can use in my terminal, rather than doing everything with the LLM in the cloud. For instance, I'm now playing a bit with an open source transcriber project, because it's possible. I want to see what the limits are rather than just going for an LLM. Keeping the saw sharpened, mainly.
Adam Bien: You are not a contrarian, because we do the same. If you go to my GitHub account and search for projects starting with Z, the Z stands for zero dependencies. You will find maybe 20 to 30 terminal Java scripts, Java 25 scripts, that perform useful things. What you could also like is Zsmith, also on my GitHub account. And what is it? It is a Java 25 zero-dependency agent, and I use it. It runs in the terminal to automate things. For instance, our podcast was pre-processed with Zsmith. What it does is read the transcript, find links, and find the guests. Of course, I still read through it, but it's a great help because it finds more links than I would. Then there are TornadoVM and GPULlama. I tested that: you are able to run LLMs like Devstral from Hugging Face with a TornadoVM dependency.
This is a specific JDK, running Java locally. And I see this as the future, because this is the time of inference. Companies are spending time thinking about how to run LLMs, and that is usually an unknown process. We already have Java, so we put GPULlama on it, and we can run it without Ollama, without installing anything. I'm more concerned about, for instance, what we do in CI/CD: we will have to launch it and run it somehow. With Java, that is easy. For me, it's really hard to imagine how to do it with Python; there are so many dependencies and so much mess, and with Java there is none of that.
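The zero-dependency terminal script idea described above, such as reading a transcript and extracting links, can be sketched as a single Java file. This is an illustrative sketch only, assuming a plain-text transcript file, and not the actual Zsmith implementation:

```java
// LinkFinder.java: a zero-dependency, single-file Java script in the spirit
// of the terminal scripts described above.
// Run directly, without a build tool: java LinkFinder.java transcript.txt
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class LinkFinder {

    // Extracts http/https URLs; trailing punctuation is trimmed naively.
    static List<String> findLinks(String text) {
        var links = new ArrayList<String>();
        var matcher = Pattern.compile("https?://\\S+").matcher(text);
        while (matcher.find()) {
            links.add(matcher.group().replaceAll("[.,)\\]]+$", ""));
        }
        return links;
    }

    public static void main(String[] args) throws Exception {
        // Read the transcript from a file if given, else use a small sample.
        String text = args.length > 0
                ? Files.readString(Path.of(args[0]))
                : "Quarkus lives at https://quarkus.io, the JDK at https://openjdk.org.";
        findLinks(text).forEach(System.out::println);
    }
}
```

Because the whole tool is one file with only standard-library imports, launching it from a terminal or a CI/CD pipeline is a single `java` invocation, which is the operational simplicity being argued for here.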
Olimpiu Pop: Well, TornadoVM, that's a cool piece of technology. I have to talk to them as well, because they are building some cool stuff there.
Adam Bien: Yes. And they released TornadoVM 4, which is way faster than the old one, and I think it's kind of the future.
Olimpiu Pop: It seems like it, because, in the end, AI is getting more into the operational space, and then, obviously, you don't do that much training anymore. You're moving towards inference, towards other types of loads.
Adam Bien: And if you take a look at Uber or Amazon, most of these companies are actually running inference on Java, multi-threaded. You could run Python in production, but I don't know whether it's a good idea if everything else is in Java, for instance.
Olimpiu Pop: Yes. Well, we're not going to debate that, not now. Okay. Thank you, Adam.
Mentioned: