
# Tudor Gîrba on How Moldable Development Offers a Novel Way to Reason about Systems

Charles Humble discusses Moldable Development with its creator Tudor Gîrba. They discuss Gîrba’s key insight—that developers spend more than half their time reading systems rather than writing them—and how this led to the creation of a novel development approach, Moldable Development, and a corresponding IDE, Glamorous Toolkit, which has the potential to change both how we reason about and how we make decisions about software systems.

### Key Takeaways

• Developers spend more than half their time reading code in order to understand what a given system does, but this activity is rarely discussed and thus hasn’t been optimized.
• Reading source code in an editor is a remarkably inefficient way of understanding what a system does; a better way might well be to build visualizations from the software itself. In effect this introduces another feedback loop, much like how agile and DevOps did.
• Because software is highly contextual, these visualizations require custom tools to be built for every development problem.  Although this sounds expensive, Gîrba argues that the efficiency gains more than offset the development costs.
• Gîrba and his team have developed Glamorous Toolkit as a way of exploring his ideas.   Glamorous Toolkit is written in Pharo, a Smalltalk dialect, but can be used with more common enterprise languages including Java and C#.
• They don’t see Glamorous Toolkit as the final answer, but rather as a way of exploring the problem domain and, hopefully, starting a conversation about this key part of software development.

## Transcript

Charles Humble: Hello, and welcome to the InfoQ Podcast. I'm Charles Humble, one of the co-hosts of the show and editor in chief at Cloud Native consultancy firm, Container Solutions. My guest this week is Tudor Gîrba. He describes himself as a software environmentalist and is CEO of feenk.com where he works to make the inside of systems explainable. Much of his work is embodied in Glamorous Toolkit, a novel development environment that enables something he refers to as moldable development. He gave a talk on his work at QCon Plus in November and it honestly blew me away. And so I was really keen to get him onto the podcast to explore his ideas some more. Tudor, welcome to the InfoQ Podcast.

Tudor Gîrba: Nice to meet you, Charles.

## What is the problem you are trying to solve with moldable development? [01:20]

Charles Humble: Thank you, likewise. So I thought a good place to start would be to try and explain what is the problem? What is it that you are trying to solve with moldable development?

Tudor Gîrba: The actual issue is figuring systems out. So developers alone spend the largest chunk of their time trying to figure the system out, to understand what to do next. And most of this time is actually spent in a rather implicit manner.

Charles Humble: Okay. So what do you mean by that?

Tudor Gîrba: I actually went around and I asked several thousand developers if they agree with this statement: that they read code more than 50% of their time. And the vast majority do agree. There are also quite a number of studies now, going back about four decades, saying the same thing. And then the next thing I ask them is, when was the last time you talked about how you read code? So not about the code that you read but about how you do the reading. And it turns out that's not really a subject of conversation. And when something is not a subject of conversation, it's not explicit. If it's not explicit, it has never been optimized. And we are talking here about the single largest expense we have in software development. So that's what we start from.

## How do you try and solve this problem? [02:30]

Charles Humble: And then given that starting point, how do you then try and solve that problem?

Tudor Gîrba: The way we do it is we look at the outcome. So why do people read code? It's because they want to change something. They want to affect the system in some form. There's a very specific intent behind this. It's not reading like a textbook. It's not like reading some novel. There's a very specific intent. Of course there are different scenarios in which people read. For example, when you learn a new language, it's going to be completely different than when you're trying to go through your own system to fix a bug. We're talking about the latter part: going through your own system and trying to do something with it. Now, from this perspective, if the end goal is decision making, it means that reading is just the way we extract information out of the data around the system. And by data I mean everything: code, configurations, logs, signals coming from the production system, data in the database. It's all data, but reading happens to be the most manual possible way in which we can extract information out of that data.

So the alternative is, of course, to build tools for it. Now, the only thing here is that software is also highly contextual. That is, we can't predict from the outside what problems people have inside the software system. We can predict classes of problems but we can't predict specific problems. And it's a key perspective to take into account, because we can't predict the specific problems people have. So for example, at the QCon Plus talk, I showed problems about dependencies. Everybody has some sort of questions about dependencies, but there I was talking about feature toggles. If you define a feature toggle in one part of the code and then use it in some other part of the code, all of a sudden you actually have a dependency there, without that dependency being visible at the level of the language. If I just look at the code, it's hard to see the dependency, but if I understand what the framework behind this is, then all of a sudden I have a dependency. And there are many kinds of such details in a software system.
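To make the feature-toggle example concrete, here is a minimal sketch in Python (all the names, such as `FEATURE_TOGGLES` and `new_checkout_flow`, are invented for illustration, not taken from any real system): the toggle is defined in one place and consumed by name somewhere else, so no import or type relation makes the dependency visible, and a tiny context-specific tool is needed to surface it.

```python
# Hypothetical toggles.py: one part of the codebase defines the toggle.
FEATURE_TOGGLES = {"new_checkout_flow": False}

# Hypothetical checkout.py: a distant part of the codebase consumes it by name.
def render_checkout(toggles):
    # The string "new_checkout_flow" is the only link back to the definition;
    # at the language level there is no visible dependency between the files.
    if toggles.get("new_checkout_flow"):
        return "new checkout"
    return "legacy checkout"

# A tiny context-specific tool: find every file that mentions a given toggle.
def find_toggle_usages(sources, toggle_name):
    """sources maps file name -> file text; returns the files naming the toggle."""
    return [name for name, text in sources.items() if toggle_name in text]

sources = {
    "toggles.py": 'FEATURE_TOGGLES = {"new_checkout_flow": False}',
    "checkout.py": 'if toggles.get("new_checkout_flow"): ...',
    "billing.py": "def charge(): ...",
}
print(find_toggle_usages(sources, "new_checkout_flow"))
# -> ['toggles.py', 'checkout.py']
```

The check is trivially simple precisely because it bakes in knowledge of this particular framework convention, which is the sense in which such tools are contextual.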

There are many, many layers of, some people can call them languages, but they are just various kinds of semantics that, if you don't see them, you're missing the point. So to reason about these kinds of details, you have to understand the context of that system, because chances are that the problems people have, the specific problems, are unique to that system. So in this situation, you can't build clicking tools that are generic, because a clicking tool bakes the problem into the button. I come from a line of research where we were focused on finding the interesting buttons on which developers would click and which would make their life immediately better. It turns out that there is no such button that we can build, but we can provide developers with the building blocks so that they can build their own buttons. And that's the thing, that's what we think the essence of this part of the work is. We think that we need tools, but those tools have to be contextual. And for tools to be contextual, you have to build them after you know the context.

## So it's another way of getting a feedback loop into the system that then allows you to reason about it from a different perspective? [05:30]

Charles Humble: Right? And that's a huge inversion of control from the way we typically think about these things. It's a bit like how you can't just go and grab a generic unit test and apply it to your code and imagine the unit test is written for the code; it's not a magical thing you can just go and grab off the shelf. It's the same sort of idea. And in the same way that unit tests, if you have enough of them, give you some confidence that, well, I've changed this thing and my unit tests still pass, so hopefully I haven't broken another thing. It's another way of getting a feedback loop into the system that then allows you to reason about it from a different perspective. Is that a reasonable summary of what you're getting at?

Tudor Gîrba: Absolutely, absolutely. Exactly. A unit test is an analysis, basically: you give it all sorts of data and at the end of it, you get the green or red output. So that's an analysis, and nobody downloads those unit tests from the web, as you said. But take another area where people also apply analysis: static analysis, for example. Pretty much the way people use static analysis is that they download it from the web, and then the same thing is being applied on a thousand systems, which by definition means those rules have to capture what is common between those systems and not what is specific, and hence they miss the value.

So that's why it's not seldom that we see systems in which we have zero failing tests and 10,000 warnings. It's not because people don't care, it's because the tests capture value, while the generic static analyses capture somebody else's value, which might or might not be interesting for you. And that's a problem. So the way to change this is we take exactly the same flow that you have when you're doing, for example, test-driven development, but we apply it to every single development facet and problem.

Charles Humble: I think that's really interesting because, in the time I've been involved in software, the big thing that's happened really is that we've added a number of feedback loops. You could argue that Agile was a way of getting a feedback loop between the development team and the business. So if we get them talking to each other and find some common way of exploring a thing, then we start building software the business actually wants. And when I started in the industry, that was a very, very radical idea. The thought that business people and developers might talk to each other was utterly bizarre.

And now it would be quite weird if you weren't doing that. I mean, people did, but it was relatively unusual. And then we had the whole DevOps thing, where you're basically accelerating the speed at which you can get feedback loops as a way of optimizing how quickly you can get code into production. So you have a lot of automation. And again, it's another feedback loop. And what you are talking about is effectively introducing another feedback loop, which is in the business of understanding, what does this system actually do? Is that right?

Tudor Gîrba: That is a very nice way of putting it. It absolutely is right. We are introducing a feedback loop. So if you go back to this Agile thing, where business and technology were talking to one another, indeed just that thing alone created this huge new opportunity of creating value. But the conversations even there are not so much about how the system is built inside, because there's a clear separation between those two things. The conversation is at the interface level. It's just about what the system does. But then with DevOps, it's interesting because that feedback loop is much more of a technical nature. It's just about, how do I deploy that system? How do I put it in production? How do I go from a commit to the deployment? It's the geekiest problem you can possibly have in the deployment.

You're just talking about the pipelines, and pipelines is a term that became cool with DevOps, which is fascinating. And that technical thing enabled a whole new level of value creation. And that's crazy, because it's far away from business, or apparently it's far away from business, and yet it enables a whole new way of business. Okay, what we're talking about is, we are saying, take that black box of a system and let's have a conversation about any piece of it inside of that black box, and make it understandable to different parties. And those parties can be technical, or maybe they're business or non-technical, or even users. For example, if you go into data rights, it's not enough to understand just the data; it's very important to understand what the algorithms do.

So that's exactly what moldable development does. And this is a whole new feedback loop, and we should actually reason about it like that, because every new feedback loop opens up a space that is pretty much unpredictable before you enter that space. You don't see it until you actually live in that space for a while. So for example, if you look at the kinds of businesses and business models that are possible today, many of them were actually not quite predictable 12 to 15 years ago, before we had the actual skills and used them at a reasonable scale. The same thing, we think, is going to happen with moldable development.

## What is the representation that you use that increases the bandwidth in terms of absorbing how software works? [10:05]

Charles Humble: So there's one other thing that you said which I want to unpack a little bit, because again, I just think it's really interesting. The point about moldable development is that the way we typically interpret our systems is through reading source code, basically. And your point is that reading source code as source code is quite an inefficient way of understanding what a system does. So what do you do instead? What's the representation that you use that increases the bandwidth in terms of absorbing how software works?

Tudor Gîrba: First of all, that's absolutely correct. When we manually go through a system, we accept the fact that we cannot go through the whole system. Even if you take a small system, I don't know, a quarter of a million lines of code, and let's say you read really fast, one line in two seconds, it takes a whole person-month to read the whole thing, if you do it at eight hours per day. This is just to read it. So by the time you finish, that system has changed many times and you have no idea how. So reading simply is not a possibility; it's not a scalable solution. And when we see people drawing, for example, a summary based on what they have read or based on what other people have read, what we are essentially saying is that that summary, that drawing, will document what people believe the system to be, not what the system actually is.
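As a quick back-of-the-envelope check of that estimate, here is the arithmetic in Python (the inputs are just the figures from the example above, not measurements):

```python
lines = 250_000        # a "small" system: a quarter of a million lines of code
seconds_per_line = 2   # reading really fast
hours_per_day = 8

total_hours = lines * seconds_per_line / 3600   # seconds -> hours
working_days = total_hours / hours_per_day

print(f"{total_hours:.0f} hours, {working_days:.0f} working days")
# -> 139 hours, 17 working days: roughly a person-month of nothing but reading
```

And that is the best case: it assumes uninterrupted reading at a constant pace, with no time spent navigating or re-reading.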

And that is what we see in practice as being the single largest challenge people have with their existing systems. People don't lack the ability to fix problems. They lack the ability to see problems. So what do we do instead? We treat every single development problem like a data problem. How do you deal with a data problem? You start from the problem, and then you figure out what kind of representation would make it understandable or more approachable. And then you go and build that one, and that's it, that's the whole thing.

It just so happens that this changes a lot. It has lots of implications. And we are doing this from the smallest possible problem, like a tiny bug, to how we reason about, I don't know, a 20-million-line legacy system that has to be moved in some way. And even if you would think that what works for a bug at the small scale is not something that is useful at the large scale, one of the things that we learned during our work is that it's actually the same space, and there is a systematic way to approach both of these problems through the same skills and tools, basically.

## Do you all converge on the same understanding of the system or do you still have a diversity of opinions on how the thing works? [12:30]

Charles Humble: Right. Yes. Certainly my experience of working with legacy systems is that you gradually build up, or at least I do, I end up with a mental model of how a system works, which is quite visual. I know that this component and that component interact, and I can see the message flows or whatever it is between the different components, assuming it's distributed in some way. But as you say, it's a useful mental model, but, all models are wrong, some are useful. It's not an accurate model, but it's hopefully good enough for my purposes. If you apply this way of working, does everyone end up with the same mental model? Do you all converge on the same understanding of the system or do you still have a diversity of opinions on how the thing works?

Tudor Gîrba: That's a very nice question. First of all, about the pictures that we create about the system: we create pictures, and we typically use maybe some sort of a query against the system. But the query is not just a language query or a source code query; it can take some information from the configuration, something from the source code, and maybe there are three different languages involved in the implementation of the system and you want to find out how data traces through the whole system, for example. We build pictures, obviously, and those pictures summarize and compress what we think is interesting about the problem that we have in the system. But those pictures are being created automatically through a custom tool. So we spend a little bit of time building a custom tool, and then we're going to use that tool to produce the picture.

Sometimes the picture is used only for the duration of a single question. Sometimes it's being reused many times over. So when I was saying that when people draw a picture by hand they summarize their belief about the system, it's simply because it's impossible to know the whole system. It works in the small. We see lots of tutorials about how people work with a couple of screens, and that problem, a couple of screens, you can deal with by just relying on memory and just reading brute force through it. But you simply cannot do it when you have lots of things that you don't know about. And that's the problem. That's the main issue that we are actually solving. So when we build these pictures, what we find is, and that was a reasonably surprising effect: internally, when we work, we send a lot of pictures over chat.

We use chat as a communication tool, and we find that those pictures represent not how the system looks from the outside but, if I have a question, I'm going to send a picture of what my hypothesis about the problem is. And then very often we find that I send a picture and somebody simply answers with a solution within minutes. There's no additional conversation. It's just, here's a picture that communicates what I think, and then somebody maybe sends a response. We noticed that it dramatically compresses communication costs. So, back to your question about whether everybody is on the same page: when this happens, we look at an organization as a set of distributed nodes, of distributed actors. And then of course the key question here is, when is the system in a synchronized state?

And often people rely on meetings, for example. Another solution is to rely on eventual consistency. So it doesn't matter so much when people are on the same page. What matters is that when they make a decision, they are on the same page, and that's the key. And the other thing to keep in mind here is that we are talking about new kinds of skills, and the most important skill here is the skill of reading. So it's not like the system will necessarily be more explainable by itself, although it does gain those kinds of properties over time. But what we are enhancing first and foremost is the ability of a single individual to answer whatever questions that individual might have about some unknown piece of data in the system.

So by doing that, you're going to simply increase the likelihood that they will be on the same page as a team afterwards. Our focus was, how do we affect a 15-minute effort to solve something in a system? And because of that, we can now work fundamentally differently in the way we look at our system. So for example, the first thing we see when we look at the system is not an editor.

Charles Humble: Right. Yes. The name is really interesting there, isn't it? Because it's an editor which tells you that it's designed for a different purpose. It's not designed for reading code, it's designed for editing code. So we are using the wrong tool for the job.

Tudor Gîrba: And then of course, if the only thing that you're being presented with is a piece of text, then you'll feel the need to scroll through it and read it, because these are the affordances that the tools offer.

Charles Humble: Right. So what's the way to change that?

Tudor Gîrba: The way to change that is we change the nature of the tool. We are now talking about this idea of a moldable development environment, which is essentially an environment whose main or core property is that it can change shape while you work with it. Maybe the closest approximation of it is in the data science space: if I take a look at two different notebooks that address two different problems, they will simply look different, they visually look different. Whereas if I take two screenshots of a development environment open on two different problems, even on two different systems, I will basically see the same thing. They will not be visually distinguishable, and that's a missed opportunity. And that's basically what we are addressing.

Charles Humble: I think it's so interesting because, I mean, I've been involved in software development, I don't really program anymore, but I programmed professionally for 20-plus years. That was my job for 20 or so years. And in all of that time, all of the work that was done to make the developer's job easier was about optimizing the coding bit. It was all about making the business of actually writing code faster. So you get lots of shortcuts in your IDE. There are lots of things that languages increasingly do implicitly that, in a lot of cases, actually trade off readability for speed of input. And I think it's one of those wonderful insights that feels really obvious when you say it, but I've never heard anyone say, "Hang on a minute, are we optimizing for completely the wrong thing here?"

## Isn’t this a very expensive way to understand a code base? [18:30]

Charles Humble: And I think that we are. I can think of a couple of things that are bound to come up if you're trying to sell this in an organization, and actually they're very reminiscent of the kind of arguments we used to have about writing unit tests, but I should ask them anyway. So for example, you are building a lot of effectively custom tools to understand a code base, and isn't that a very expensive way of understanding the code base? Aren't I making an awful lot of things that I then throw away, that are disposable? Surely that's a terrible use of developer time.

Tudor Gîrba: Thank you very much. Yeah. That's exactly the first question that people typically ask. And the answer is absolutely no. In fact, you're going to optimize that piece of work, the figuring the system out. We think we can optimize it by an order of magnitude by doing that. So the first argument here is costs; I like the economic argument. As you were saying, we have optimized for so long for creating systems fast and faster, to the point at which, today, the body of software grows non-linearly year over year, but at the same time we are unable to recycle old systems. So we are actually in a place where the way we are handling software is not sustainable. We are behaving pretty much like the plastic industry: we are focused only on creation, but we don't know how to take the systems apart. Or we kind of know, but we can't; even if we want to, people just fail doing it.

And the reason at the core of this problem is that before you can take a system apart, you first have to understand the parts, and right now understanding the parts of the system relies on people reading some text. It's a fundamental problem because reading is capped at a constant speed. So you can't match super-linear growth with a constant recycling function. This is the moral argument: the way we are building software today is not sustainable, and we have to address this problem very soon, especially given that we are reshaping the whole world on top of it, and also that most likely a good deal of climate crisis solutions will be based on software. Our kids will only know a world of software, and we have to create that world to be sustainable. So that's why understanding, or investing in this ability of how you figure the system out, is so important.

As I said, this was the moral argument, but if you go back to the economic argument: today we have businesses that need to change because maybe they have a market opportunity. They have the opportunity on the market to scale 10X, but now they have to do it. It's not enough to understand it. Maybe you have applied data science and figured out, oh, look, there's this opportunity, and it is just great. And then people say, "Yeah, this business intelligence means data intelligence." And then all of a sudden you go into practice and you realize, "Oh, I can't move my system. My system won't let itself be moved." So the software intelligence is basically missing there.

Charles Humble: And normally there's a window with those things. We've identified an opportunity, but we need to be able to exploit that opportunity in a specific period of time, whether that's six months or a year, typically before a competitor does it or before market conditions change. And if you can't change your system quickly, you can't exploit those things.

Tudor Gîrba: Exactly. All of a sudden, this geeky thing of, how do I read code? becomes the single largest blocker for business development. So it actually has a high impact, very much like how the pipeline, that other geeky thing in software development (who cares how the bits move from the commit to deployment?), had all of that dramatic impact. Now we have another one. In fact, the way we fund our work is through consulting, and we consult on problems. We work with two kinds of companies. The first are people that are already in some sort of a crisis: they've tried all sorts of things and they don't know how to go from A to B at all. And then they say, "Well, let's give this thing a try." And the good thing there is that we don't have to explain what we are doing. We're just providing the miracle.

And the other kind of companies are those that actually want to gain a competitive advantage, and that's the thing. We think that this new feedback loop is going to be a major source of competitive advantage for the next decade. So we talked here about two arguments, starting with the economic argument. Just think of it: when people allocate budgets, they inadvertently allocate 50% or more of that budget on a single activity nobody talks about. It's just mind-boggling. So definitely we want people to start talking about it.

Charles Humble: It's crazy actually, isn't it, it really is crazy.

Tudor Gîrba: If you just do a little bit of arithmetic here: there are 50-plus million accounts on GitHub, so let's say 50 million developers today. Let's say that each of them costs $10,000 per year, which is a ridiculously low estimate. Then 50% of that means $250 billion every single year allocated to a single activity nobody wants to optimize. We think that we can do much better with that, so it's a significant thing, and that's just going to grow. So these are the economic arguments, and then we have the moral argument.
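Spelling out that arithmetic (the inputs are the rough estimates quoted above, not measured figures):

```python
developers = 50_000_000   # roughly the number of GitHub accounts
cost_per_year = 10_000    # a deliberately low per-developer cost, in dollars
reading_share = 0.5       # the 50%+ of time spent figuring systems out

total = developers * cost_per_year * reading_share
print(f"${total / 1e9:.0f} billion per year")
# -> $250 billion per year
```

Even with these conservative inputs, the activity in question dwarfs most other line items in a software budget.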

That's something we call software environmentalism. We need to get software to be in a sustainable state. We need to be able to recycle our systems at least at the same speed that we have created them. Maybe those arguments don't resonate. But then there's a third argument, and the third argument is literally about fun. When I go around and ask people, do you love working with legacy systems? I rarely get a yes answer. Most of the time I get the shiver, like, "Ooh." And I think that's nuts, because legacy is supposed to be associated with something that is useful and beautiful and valuable.

## Are there use cases beyond code exploration for moldable development that we haven't talked about? [23:43]

Charles Humble: And actually I would even go further than that. I think that something that's happened in the course of the time that I've been involved in the industry is that, in an odd way, I feel like the fun has rather gone out of it. I mean, this may just be my personal bias, because as I said, I stopped programming, and I stopped programming because I stopped enjoying it, frankly. But I'm still really interested in the industry. And I honestly think a lot of the better companies have teams who build tools to make developers more efficient and to try and make the developer experience better.

The paved path, the DevX model that companies like Netflix and so on have, is an attempt to try, I think, to make programming better for people. But I think it's just a really important thing to think about: how do we make this joyous? How do we make this fun? How do we put a smile on people's faces again so that they want to do the work? Now, we've talked a lot about code exploration as a major use case for moldable development, but are there other use cases for this approach that we haven't talked about?

Tudor Gîrba: Oh, lots, lots. I coined this idea about 12 years ago, and I left academia soon afterward, and my goal ever since has been to validate the hypothesis that indeed there is a way to look at systems differently and thereby enhance productivity and enjoyment. And the way we are validating this is, first of all, we create a set of tools and we use those tools. We put ourselves in ridiculous situations, situations that we've never been in, new classes of problems. And the question here is, where does this not apply? We've been doing this now for 12 years and we haven't found that place yet. So, use cases. For example, observability is a thing today, and there are two sides to it: how do you get the data?

And then, what do you do with the data once you have it? And a lot of this move from monitoring to observability, if you look closely at it, is about the flexibility of getting the decision of what kind of view is interesting as close as possible to the moment when you actually have the problem. So observability is a subcase of this one, and we've been doing some of those. But there are other kinds of use cases, for example, depicting a domain, a business domain. If you look at domain-driven design, people talk about the ubiquitous language, but that ubiquitous language typically lives on a whiteboard. It's when people draw things that they enact that language. What we think is that we should be able to open the development environment, and it's the system that should show us whatever we have drawn on the whiteboard.

So it's fine to draw on a whiteboard before I know what I want to have. A plan belongs on a whiteboard. The current system doesn't belong on a whiteboard. Whatever I have drawn as a plan or as an idea or as a picture is a mental model, as you were saying before, and whatever I depicted as a mental model, we want to make it the responsibility of the system to carry forward. Literally, you inspect an object and you show it to a business person, like an object in the debugger. You stop your system, you take an object out in the development environment, you show that one to the business person, and the business person understands what you're showing. That's the level of conversation we're talking about, and it's possible; we can enhance these kinds of conversations many times over compared with even the shortest feedback loop that we have today.

Another area that was really surprising for us was reverse engineering data, but in depth. What do I mean by that? Imagine you take your data from Facebook: you download your personal data, and there's lots of data in there, a set of JSON files, but you don't really know what's in there. There's a wealth of information you can extract from it about you, and also about how Facebook is exploiting you, but it's hidden. The data is there in plain sight, but the consequences of it are hidden until you get to explore them. We've applied exactly the same tools and techniques to extract information from those kinds of sources: going in depth, doing in-depth reverse engineering of what is otherwise just called data. And we are not talking here about machine learning.

We're not talking about magic solutions. We're talking about a simple, systematic approach, which is pretty much the scientific method. Here's a hypothesis; to answer this question I'm going to use a tool, but I don't have the tool, so I first build the tool. Then I get the result, and then I ask: do I trust it? Do I need more? If yes, I repeat it. If no, then I act on it. That's the flow, that's it. And we are just applying it over and over again, and there are so many places to apply it to. The reason this is interesting, and the reason we've done it, is that this will work at scale. A set of skills and a set of tools that people can invest in once, systematically, and then use many, many times over: that is what we were looking for.
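The loop Gîrba describes, form a hypothesis, build a small throwaway tool, inspect the result, repeat, can be sketched in a few lines. The JSON snippet below is invented for illustration, and `summarize` is a hypothetical first tool one might build to learn the shape of an unknown data export before building anything more specific:

```python
import json

# Hypothetical fragment of a downloaded data export; real exports are
# directories full of JSON files whose structure you don't know upfront.
raw = '''
[{"timestamp": 1650000000, "title": "Ad click", "data": {"advertiser": "X"}},
 {"timestamp": 1650000100, "title": "Ad click", "data": {"advertiser": "Y"}}]
'''

def summarize(node, prefix=""):
    """A tiny throwaway tool: walk an unknown JSON structure and report
    the key paths and leaf types found, so a hypothesis about the data
    can be checked before building the next, more specific tool."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from summarize(value, f"{prefix}.{key}")
    elif isinstance(node, list):
        for item in node:
            yield from summarize(item, f"{prefix}[]")
    else:
        yield f"{prefix}: {type(node).__name__}"

schema = sorted(set(summarize(json.loads(raw))))
print("\n".join(schema))
```

Running this against a real export would answer one small question (what fields exist?) and immediately suggest the next tool to build, which is exactly the cycle being described.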

And I'm quite confident that we now have a strong basis to start that conversation. The conversation is what I want to emphasize. We've done the work, and people look at our demos and find them exciting, and of course they're exciting, because the comparison is so boring. But at the end of the day, what we need to do is start a global conversation about how we take our systems apart, how we figure our systems out. Because what we have learned from the other feedback loops we had before is that once we start the conversation, a decade down the line we're in a completely different world. That's what we want to achieve: we want to start that conversation. And all our work is geared towards showing that there is a conversation to be had.

## Where can listeners find out more? [29:10]

Charles Humble: And if listeners want to learn more about these ideas, explore them for themselves, and then perhaps get involved in that conversation, where's the best place for them to start?

Tudor Gîrba: We've created this concrete piece of technology to show how this works in practice, and it's the set of tools we use for our own work as well. You can go to gtoolkit.com. The environment is called Glamorous Toolkit. It embodies this idea of moldable development: it's a moldable development environment, and its main purpose is to empower developers to create custom tools for their own problems, ideally for every single problem. The other thing about it is that it's the first major case study of employing moldable development: you can use Glamorous Toolkit to reassemble Glamorous Toolkit while you work on Glamorous Toolkit. So it's not just a tool, but also an example of how far we go to make systems explainable. To give you a sense of where you start: when you open Glamorous Toolkit, you get a couple of thousand plugins already existing, a couple of thousand customizations already running in the core tool.

If you think about it, take any development environment: out of the box it comes with no plugins, because that's the core. But even on the core you need many different perspectives. So Glamorous Toolkit itself comes with that amount of plugins, just to show how far this goes. And in fact, it's not just to show that: we built those kinds of views because it was economically viable for us. People say, "Oh, maybe you created those for marketing reasons." Which is a valid perspective, but you might create a hundred of those views for marketing reasons; you won't create a thousand.

## Glamorous Toolkit is written in Pharo, which is a Smalltalk dialect. Can it be only applied to Pharo or can I use it for other languages as well? [30:50]

Charles Humble: Glamorous Toolkit is written in Pharo, which is a Smalltalk dialect. Can it be only applied to Pharo or can I use it for other languages as well?

Tudor Gîrba: Indeed. Glamorous Toolkit itself is built using Pharo, so it is itself a Smalltalk system, but we can apply it to many other systems, and we are applying it to many other systems: Java, C#, Python, JavaScript, for example. The other thing it has, and we haven't talked about this yet, is that not only does it give a different perspective on what development environments could be, it also unifies development environments with knowledge management and notebooks. It comes with that technology built in: you are always in a knowledge management environment, you can build your wiki, have your links, intertwine them with code, and have all sorts of complicated narratives, as deep a narrative as you want. Even more interesting is that you can then use the tool itself to visualize your own knowledge base, because it's a similar problem, not a different problem.

So Glamorous Toolkit is usable today, and we are actively applying it to many different input sources. And when we don't have support for something, it helps there too. For example, if your system is written in some variant of COBOL, you're going to want to build a parser, but before you build a parser you may want to go and reverse engineer the language itself. We've had a case like that in a major corporation: literally, one of their core systems was built in a language that they couldn't describe. They didn't know what the language was.

Charles Humble: It was like a custom version of COBOL that someone had built?

Tudor Gîrba: Yeah. Yes, exactly.

Charles Humble: Wow.

Tudor Gîrba: The core of the business was going through that system; it was a major piece in that whole organization, and people didn't know exactly what the language was.

Charles Humble: It's kind of extraordinary, isn't it? And even setting aside a custom version, just finding people who can read and understand plain COBOL now is not that straightforward.

Tudor Gîrba: But the thing is, you are always going to be in some place like that. Whenever people tell you this is a Java system, it's never just a Java system. There are also annotations, and all these XML files, and then all the YAMLs here and there. If you don't take all of those into account and build an interpretation of the whole system, one that also understands the frameworks and whatever they are doing with all the annotations, you're going to miss the whole point. So you are always in a state in which you have to accommodate the idea that you actually don't know what's in front of you. But that's not a scary problem; it's always a solvable problem. And for this, we need a systematic way to approach the problem through the whole stack, going from the lowest level of possible problems.

Like: I have an error coming through in a log file and I have no idea what is in there, or I need to find something and make sense of it, all the way up to: how do I reason about the architecture of a 50-million-line system? Those are not distinct problems. The solutions will be distinct, but fundamentally the skills you need for them are not that different. That's what we are showing with Glamorous Toolkit: a systematic environment in which we do many, many different kinds of things. Of course, that's also difficult to digest and difficult to get into.
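The point that a "Java system" is really a mix of Java, XML, YAML, and more can be made concrete with a trivial first tool. This is a hypothetical Python sketch, not part of Glamorous Toolkit, that inventories the artifact types in a project tree; on a typical enterprise codebase the non-Java files it surfaces are exactly the parts a Java-only reading would miss:

```python
from collections import Counter
from pathlib import Path

def language_inventory(root):
    """Count source artifacts by file extension: a first rough view of
    what a so-called 'Java system' actually consists of (XML configs,
    YAML pipelines, properties files, and so on)."""
    return Counter(
        path.suffix or "(no extension)"
        for path in Path(root).rglob("*")
        if path.is_file()
    )
```

A next, more specific tool would then parse the annotations and framework configuration these files reveal, following the same build-inspect-repeat loop.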

## How do you disseminate this? [33:53]

Charles Humble: And presumably that's the next challenge you have: how you disseminate this, how you get it out to people.

Tudor Gîrba: Exactly: disseminate all these learnings in a way that people can digest, so they can enter from various different angles, from their own context, and start learning and exploring the possibilities that are there.

Charles Humble: And do you think of Glamorous Toolkit as being the final answer or is it more an experiment?

Tudor Gîrba: It's first and foremost an exploration tool; it's our exploratory vehicle. But it's also a productive tool, so you can go and take it and do stuff with it. I think there's a very large space that we have to explore as an industry, and I'm quite convinced that once we start exploring it, literally, for real, at scale, we are going to build whole new solutions.

Charles Humble: That's wonderful. I think that's a brilliant place to wrap it up. Tudor, I could talk to you for hours on this. I think it's absolutely fascinating but I'm going to close the podcast now and just say thank you very much indeed for joining me this week on the InfoQ Podcast.

