
Panel: Bleeding Edge at Hyperspeed with No Breaks and No Breaches

Transcript

Hawkins: Thank you for coming, I've got a treat for you today. Perhaps not everyone can get to every single session in my FinTech track, I'm disappointed, but I realize it's a reality. What I've done now is I've gathered together someone from each of our sessions today to answer some of the most difficult questions that you might have about operating in a regulated industry, operating in an industry where security and resilience are paramount when you deal with customer data, etc.

The way we're going to run this is that I'm going to ask the questions to start with, because I'm nicer than you lot, and that will make the panelists feel more comfortable. After 10-15 minutes, I'll pass over to the audience, and you can start asking nice questions of our panelists. Before we get going, I'm going to ask them each to introduce themselves: who they are, who they work for, and how regulation impacts their day job or their company. After that, I'll launch into a few questions, and if, as is quite likely, I don't ask something that you're interested in, you'll get a chance to throw your question at them after a little while.

Introduction

Maude: My name is Jason Maude, and I am the head of technology advocacy at Starling Bank. Starling Bank is a licensed bank, so we are quite heavily regulated.

Calin: I'm Ana Calin. I'm a Systems Engineer at Paybase. Paybase is a payment services provider, so we also have loads of regulation: we have to comply with PCI DSS, we're FCA authorized, and there are others as well.

Quinn: I am Carolyne Quinn, working for the enterprise blockchain firm R3. Many of our customers are banks and insurance companies, and they are heavily regulated, but regulation is still a bit of a work in progress for blockchain, and we engage with the FCA a lot. We have a regulatory team, but the regulation itself is still being worked out at the moment.

Patel: My name is Suhail Patel. I am a platform engineer at Monzo Bank. We are also a fully licensed bank, and we are very heavily regulated.

Jones: My name is Will Jones. I work at Habito; we're a mortgage brokerage and an online mortgage platform, so we're also directly authorized and regulated by the FCA for activities such as the advice we give to customers about getting mortgages.

Safety of DevOps in the Financial Industry

Hawkins: Suppose I'm a regulator or an auditor, or maybe even just a traditional CIO, so I know that the way to make it hard for people to make mistakes, or hard for the bad guy to get in, is to make it hard for anyone to do anything. To that end, I have a dev team, I have an ops team, and nothing gets to production except via the ops team, and preferably nothing gets to production ever, because that's the safest way. I hear your companies don't operate like this. I'm a bit concerned that you might be ignoring some of the segregation of responsibilities required in various types of audit scenarios. Is it safe to do DevOps in the financial industry? Jason [Maude], do you want to start?

Maude: I would not only say that it is safe, I would say that it is safer than doing it the other way. Obviously, the safest way of developing code is not to do it. If you don't develop code, if you don't release anything ever, then you won't introduce any bugs. Hopefully, no one else will either, and then we can all go home. Given that people are going to want to keep developing, the safer way of doing things is to release early, release often; try to release every day at least. The reason that that is safer is that each of your releases is quite small. Whenever you get a bug or a flaw in your code, you need to look through your code to find where the flaw is. If you're releasing a day's worth of code, that's much easier than if you're releasing three months' worth of code, because three months' worth of code is going to take a whole lot more reading to get through to find the bug. There's also the possibility of two things which work independently, but don't work together, causing a bug, and that's much more likely in three months' worth of code. I'd say that, from the point of view of the safety of the customer, it is much safer to release code early and release code often.

Calin: I agree with what Jason said. From a Kubernetes and microservices point of view, if you are using Kubernetes and that's how you deploy your services, you have the option of adding rolling updates to your deployment manifests, which means that every time you make a change, every time you change your image, the old pod will only be killed once the new pod has passed all of the health checks. If the new pod doesn't work, then you still have something healthy that was never killed to begin with. It depends on how you're doing it, but, yes, it can be very safe.
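The rolling-update behavior Ana describes can be sketched as a Deployment manifest along these lines; the service name, image, and health-check endpoint are illustrative, not Paybase's actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api            # illustrative service name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # never kill an old pod before a new one is ready
      maxSurge: 1               # bring up one new pod at a time
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: payments-api
        image: registry.example.com/payments-api:v42   # hypothetical image
        readinessProbe:         # the health check the new pod must pass
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

With `maxUnavailable: 0`, Kubernetes only terminates an old pod after the replacement has passed its readiness probe, so a broken image never fully replaces the healthy version.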

Patel: I agree with both Jason and Ana. As Sarah alluded to this morning, a massive release means that you have to have an ops team, and as someone who would probably be in that position, that's not something that I want to be worrying about day to day: running other people's code and managing their release cycles. An ops team is usually far smaller than general engineering. They don't have the capacity to go through every single change line by line and figure out whether it's going to impact users or what the global impact is going to be. If you have a much faster, more frequent release cycle, owned by the engineers who are working on the product on a day-to-day basis, it means that they are much more confident releasing their smaller-scope changes into production and understanding the failure scenarios.

Availability of Developers with Appropriate Skills

Hawkins: I remember a couple of years ago, when we were starting Starling Bank, talking to the FCA and trying to persuade them that we could run entirely in the cloud. One of the things they were concerned about wasn't resilience or privacy or security or anything like that; it was key-man dependencies and the availability of skills. The FCA were rather scared about us going with AWS, believe it or not, because they thought the skills weren't available out there. I think they're probably a little better educated now, but there's a point there, isn't there? We all know that development is 90% copying and pasting off Stack Overflow, and the Stack Overflow support for, say, Java, where there are 9.3 million developers, is a bit deeper than it is for Haskell, for instance. The point generalizes to Monzo using Cassandra, maybe, or to people who are looking to build their business on R3 Corda. How can you answer those sorts of challenges from regulators or other people about availability of skills, should you get into trouble?

Jones: I think in the early days of the business, as I alluded to in my talk, recruiting for specific skill sets is always a challenge, and when you're just starting out, the bar you should have for hiring is incredibly high regardless of the technologies you're picking. To some degree the technologies do factor into it, but you're just looking for brilliant engineers, whoever you can get your hands on, to get something bootstrapped and off the ground. Then, once you get past that initial wave and you're actually building something credible, your product-market fit has been ascertained, and you're building a scalable business, at the point, basically, at which the regulator considers you a serious entity and is looking to engage with you on a much deeper level, the key skills you're looking for aren't necessarily language-specific, because now you have a set of more general problems, like scaling a business and building disaster recovery and fault tolerance. Yes, the language you pick for those is important, but some of these are more architectural concerns that you're not going to find a Stack Overflow answer for. You know, how do you split the monolith up into 40 microservices in 12 months? Unless that answer does exist, in which case, send me the link. There is a kind of gap period in the middle where you might have a skill shortage, but I think you can mitigate it, especially if you are building distributed applications using microservices.
You can play cards like the polyglot card, so if you reach a point where you can't find the skills, then you might have to make a hard choice and pick a new stack, but at least you've architected the system in a manner that lets you make those choices. Whereas if you're favoring a more traditional approach, building a monolith, then you don't have massive velocity in terms of releases and the ability to change it, and you're probably going to run into those problems at some point without being prepared to adapt. Most of these traditional challenges are about time to release and time to adapt, and those are the problems you should be solving. Hopefully, those problems are independent of the tools you pick, and if you solve them, they should enable you to adapt to any kind of challenge the regulator might bring in that regard.

Patel: If we look at a lot of legacy banks, they're having the same challenges: they want to hire COBOL and mainframe developers, which are equally hard to find in the market, maybe even more so than Cassandra and Kubernetes engineers nowadays. We believe in hiring good engineers rather than engineers who happen to know particular technologies, and by leveraging the technology that we are using, a modern stack, we actually have a larger pool of people that we can tap into, because these are exciting technologies that are in the market right now. People who are coming out of university, the next wave of engineers, are getting really excited about these technologies, and they're being adopted by a lot of major platforms like AWS and Google Cloud Platform, which brings even more credibility into the market. I don't see AWS releasing a mainframe product anytime soon.

Hawkins: Carolyne [Quinn], do you want to say why a bank would think that R3 Corda is a safe bet?

Quinn: Just a bit of context: we developed our software about two to three years ago, so it's very new, and we have to work very hard to get people comfortable with using us, so we run a lot of meetups. We have about three or four people on the developer relations team who go around the world training people how to use Corda and getting them comfortable with the software and what it does. We're running Corda trials where people can use our software for six weeks in different use cases; we have one running at the moment in selling houses, basically, with loads of real estate brokers and banks all involved in that.

Developers who have worked on other blockchain software may be familiar with a lot of it, as it's written in Java. It's also written in Kotlin, and many people aren't familiar with that either, but we hire people all the time who don't know Kotlin; they learn it on the job, and apparently it's quite easy to learn. In summary, we're working hard all around the world, we just did a meetup in Poland, we did one in India last week, helping developers get excited about Corda, which is an open source technology.

Keeping up with the Novelties

Hawkins: I've got a question which I'll throw to Ana [Calin] first. Kubernetes moves pretty fast, so there are new versions all the time, new things: Istio, Envoy, Ambassador, all these sorts of things. Doesn't that introduce a bit of an overhead of keeping up? There used to be this idea that you never run the point-zero of anything, because it's just a bit too new; it might have some bugs, it might have some problems. Do you still consider things like that, or do you just take what comes along?

Calin: Very good question. There are two points to this. First of all, you don't just go ahead and deploy a new version of something into production out of the blue. You first take it through multiple environments, and that's where continuous deployment and continuous delivery come into play. We do keep up with most of the newest versions; it hasn't always worked out, and when it hasn't, we've just rolled back. Because we deploy multiple times a day, it's quite easy to do that.

Hawkins: Is there a risk that you might back the wrong horse at any point here? You might adopt something very new, and get stuck with it and find out a little bit further down the line that you can't get out of it?

Calin: That hasn't been the case; I can't imagine a situation in which we couldn't get out of a certain technology. It hasn't happened, but we do roll back and forth, and sometimes we wait for a new technology to become more mature before we use it again. We have a lot of flexibility in terms of playing around with tools.

Jones: In terms of being locked into a specific technology, there are two stances I have on that. One angle is a massive vendor like AWS: you've got to strike a balance between exploiting the vendor, harnessing all the stuff they can offer you, and weighing that up against lock-in, how likely it is, or how desperately you're trying to ensure, that you can deploy to multiple clouds. There are certain environments in which that may be required by an especially tight regulatory regime; even in our environment, which is quite tightly regulated, it's not required. It can be worth evaluating the tradeoff, and I think in lots of cases it is worth leaning into someone like AWS, especially at a point in the business where you're focusing on growth, where that could be really valuable.

In terms of locking into a particular technology like Kubernetes, it's getting harder and harder to become truly locked in, because as the practice evolves, tools like Kubernetes are being built by engineers who appreciate the challenges of lock-in. If you look at Kubernetes, a lot of the API and the way it's designed is there so that you can minimize the points at which you couple to it. If you look at things like exposing configuration through volumes, the only thing you have to couple yourself to is a file system, which is something you can pull Kubernetes out from under without disrupting the app. These days, more and more frameworks and libraries are being built in a way that makes it easier not to couple, compared to a previous wave of technologies, frameworks that shall not be named, where you were baked into the framework. It feels like a lot more things are now built so that you can sit on top of them without necessarily setting roots into them.
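Will's configuration-through-volumes point can be sketched roughly like this; the names, image, and mount path are hypothetical. The application only ever reads a plain file, so the same binary would run unchanged against an ordinary directory if Kubernetes were swapped out:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings            # illustrative name
data:
  feature-flags.json: |
    {"newOnboarding": true}
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1   # hypothetical image
    volumeMounts:
    - name: settings
      mountPath: /etc/config    # the app only ever reads plain files here
  volumes:
  - name: settings
    configMap:
      name: app-settings
```

The app's only coupling point is the file at /etc/config/feature-flags.json, not any Kubernetes API.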

Open Banking

Hawkins: Let's talk about open banking. Jason [Maude], maybe you take this one first. There's some spectacularly sensitive data flying around here: from someone's bank balance or bank statement, you can determine all sorts of personal details. People are now giving third parties access to that data via open banking APIs. Monzo and Starling both offer such APIs, Paybase are in that game as well, and Habito are in the Starling Marketplace, so on the other side of it; and then Carolyne [Quinn], of course, is working with a lot of banks that are in this too. How can we do that safely? Is it not just too risky to have this data flying around all over the place?

Maude: Yes, we can do it safely. To give a brief answer: there has always been a need to share such banking data. What happened in the old days is that you used to print out your bank statement on a piece of paper, then go and show it to the mortgage broker or whoever, in order to say, "Well, here's what I'm earning," and to provide that proof. Then that moved on to the advanced technology of giving another organization your login details for your internet banking, allowing them to go in and screen-scrape all your data off a browser, and hoping that they didn't also authorize a £10,000 payment to themselves at the same time.

Open banking is hopefully a step up above both of those things, in that you can now choose what data you share. You can say, "I want to share my balance with this third party, but not my transaction history," or, "I want to share my transaction history, but not my personal data, not my name and address."

The ability to segregate this down and say, "I want to share this bit of data but not that bit of data," makes this much more secure rather than less, because with the printout you could lose it, or they could take data from it by writing it down without you knowing, and with the screen scraping they could do goodness knows what. The only other option is to not share banking data at all, which unfortunately doesn't fly when the third party with whom you want to share the data needs to make financial decisions about you. We could abandon the mortgage market, but Will [Jones] might be sad if that happened.
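The scoped sharing Jason describes can be modeled in miniature; the scope names and account fields below are made up for illustration and don't correspond to any real open banking API:

```python
# A toy model of scope-based data sharing: the third party only receives
# the fields covered by the scopes the customer explicitly granted.

ACCOUNT = {
    "balance": {"amount": 1234.56, "currency": "GBP"},
    "transactions": [{"date": "2019-03-01", "amount": -42.00}],
    "identity": {"name": "A. Customer", "address": "1 Example Street"},
}

SCOPE_FIELDS = {
    "read:balance": ["balance"],
    "read:transactions": ["transactions"],
    "read:identity": ["identity"],
}

def data_for(granted_scopes):
    """Return only the account data covered by the granted scopes."""
    allowed = {f for s in granted_scopes for f in SCOPE_FIELDS.get(s, [])}
    return {k: v for k, v in ACCOUNT.items() if k in allowed}

# Share the balance, but not transaction history or personal details.
shared = data_for({"read:balance"})
```

The printout and screen-scraping approaches are all-or-nothing; here, anything outside the granted scopes simply never leaves the bank.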

Patel: In terms of open banking, just to add to that: having the control to be able to revoke access to your data is equally important. If you give an organization access to your web portal to do screen scraping, you have no idea what they've done with your data, how often they access it, or how long they're going to continue to have access. Most people don't realize that when you give that kind of access, you lose protections from your bank, because you've essentially authorized the third party and given them your username and password, which is your method of authentication. Open banking, via regulation, provides you better insight into how often these organizations are extracting your data, how they're using it, and what pieces of data they have access to, which is incredibly important and something that you didn't have with the previous methods.

Questions and Answers

Participant 1: You all operate software delivery processes within a regulated environment. Are there any elements of those processes you would choose not to do if they were optional to you?

Patel: A lot of our software delivery process goes into change management, ensuring that all changes we make are forward-looking but reversible. If we need to adapt to failure, if we ship a bad release, we can reverse that decision in a matter of minutes. Thinking about security from those two standpoints, even if we weren't in a regulated environment, those are two really nice properties to have in any software organization: to be able to ship software securely and also to have reversibility. Those things would continue even in a non-regulated environment.

Calin: I have a very specific example. PCI DSS asks organizations to check their firewall rules every three months or every six months. What that means is you have to gather as a team in a room and say, "Do we still want these firewall rules?" We've written them using infrastructure as code, we've restricted them as much as possible, and we know they won't change unless we change them. Everything is change-controlled, everything can be tracked, we have a full audit trail, and still we have to do this. In my opinion, it's a very silly thing, so I would remove that.

Maude: I probably wouldn't remove much that's significant about our software deployment procedures. A lot of it is about finding ways to fulfill the regulations in an imaginative manner. For example, we use things like ChatOps: we use Slack for people to gain permission, in a four-eyes manner, to initiate a deployment to production. That allows us to fulfill regulatory requirements around having oversight, having an audit log, and making sure that we can control things, but it doesn't require us to sign bits of paper or have a giant filing cabinet somewhere. I think it's about finding ways of fulfilling the regulations in a way that does not block you, which is definitely possible.
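The four-eyes gate Jason describes can be sketched as a small state machine; the names and flow are illustrative, not Starling's actual tooling, but the key properties are the same: a second, distinct person must approve, and every action lands in an audit log.

```python
# A toy "four-eyes" deployment gate: a deploy only proceeds once a second,
# distinct person approves it, and every step is appended to an audit log.

class FourEyesDeploy:
    def __init__(self, requester):
        self.requester = requester
        self.approver = None
        self.audit_log = [f"{requester} requested deploy"]

    def approve(self, approver):
        # Self-approval is rejected: the whole point is a second pair of eyes.
        if approver == self.requester:
            raise ValueError("approver must be different from requester")
        self.approver = approver
        self.audit_log.append(f"{approver} approved deploy")

    def deploy(self):
        if self.approver is None:
            raise RuntimeError("four-eyes check failed: no second approver")
        self.audit_log.append("deployed to production")
        return True

release = FourEyesDeploy("alice")
release.approve("bob")        # a second pair of eyes signs off, e.g. in Slack
release.deploy()
```

In a ChatOps setup, the request and approval would be Slack messages, and the audit log is simply the channel history plus whatever the bot records.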

Participant 2: I want to get back to the key-man thing quickly, because I think there's an interesting angle there. With microservices, we've decoupled the monolith so that an individual flaw, as we heard in the keynote this morning, doesn't mean a total outage; it means a compartmentalized outage, which is definitely better from a risk standpoint. There's still an implicit assumption in the key-man thing that the expertise on the product is locked in the minds of a small number of individuals. I'm wondering if it might be possible to convince a regulator that with a microservices architecture, it's actually the API contracts of the microservices that are the important thing to understand, and that it might be less important to have a key-man understanding of the internals, since they can be better defined. Do you think that's possible? How might you go about convincing someone who's not a software engineer that that's true?

Maude: Briefly that could be possible. I still think that you're not going to be able to remove the need for a human to understand what the contract means and how the underlying implementation works. You're going to have to have some sort of rotation between the various teams working on the various microservices so that you can spread knowledge out and not have it concentrated in one or two individuals who may leave.

Jones: This ties a bit to the last question, actually. It's easy to paint the regulator as a boogeyman, but Habito is broadly aligned with them; in this case they're arguing the key-man thing from a risk perspective, and we definitely share that opinion. You always want to mitigate stuff like that with documentation, but there are other ways to look at it. If you're onboarding a new engineer, and I think this probably ties into what you're saying, it's good to have strong contracts and have them documented. You probably still want some high-level documentation to optimize your onboarding process, so that you can get new engineers up to speed as quickly as possible.

From a business point of view, selfishly, you're solving a very real problem, which is: how can I grow my engineering team effectively? The regulator is just viewing that problem from a different angle, which is: what if your engineering team shrinks drastically? It ties into the previous question as well, what would I remove from my deployment process? Probably nothing, because the things I put in that process are there to mitigate risk, which is exactly what the regulator is asking me to do, to a degree.

Participant 3: I'd like to ask the panel how important it is that professionals like yourselves, and others in more traditional regulated industries, update their education and language when talking about regulation. To give you an example, I still get loads of due diligence questionnaires, because a lot of our clients are regulated, which ask about the location of my data center, the fire risk assessment I've done on it, and loads of other examples, and I waste hours of my life trying to explain what an AWS region or a multi-availability-zone deployment is to people who haven't got a clue about this. What sources of education can you recommend to them to update their knowledge about how modern software development works?

Patel: From our perspective, that's going to be a losing battle if you're looking for one specific recommendation. We've been quite fortunate in that there has been a big influx of these startup banks into the regulated market. They've all used common infrastructure and modern software development practices, so not going for waterfall, having an operations manager, and doing a big bang release.

We've been quite fortunate in bending the rules a little bit, but that's not to say that we've been doing anything insecure or invalid or unjustified. Sometimes the regulator is impressed with how far we've thought about these things compared to other, traditional banks, when you look at their answers and ours, and at how we architect for failure, for example, and resiliency. Tying ourselves to providers and vendors like AWS, and to technologies which are open rather than closed source, allows us to tap into broader communities and discuss these things with other companies, rather than being locked into something internal which we would need to spend hours and hours explaining to regulators.

Calin: We also had a similar problem recently when we got our PCI DSS compliance. Part of it was training the auditor in how containers work, and there was a lot of back and forth, a lot of days spent trying to explain why this is secure enough based on what the requirement asks; I touch upon it in my talk as well. We haven't figured that out from a PCI DSS point of view. I agree that it's a problem. That is going to be my call to action, so let's talk after.

Maude: If you send me the name and email address of whoever creates these forms, I would be happy to talk to them about what an AWS region is, what an availability zone is, and so on, in order to try to convince them that "what is the fire risk in a place whose location I don't know?" is not a question that you can sensibly answer.

Hawkins: There's something in the AWS terms and conditions which says they can throw you off in 30 days, which gives you 30 days to re-implement your platform on another cloud provider. Do financial institutions really sign those Ts and Cs?

Maude: Yes, we have to. As part of the regulation, we have to be able to show that we have tested being on a different cloud provider, a cloud provider other than AWS. What is meant by "testing" in the regulation is that we have a document that says we can do it, and how we would do it, not that we have actually tried it in the software engineering sense. It would be possible, it would be difficult, but the ability to not be tied down to a single cloud provider is something that we, as financial institutions, are going to have to concentrate on more and more.

Participant 4: I was going to comment on that, because it seems that the European regulators these days are focusing more and more on the concept of an exit strategy from a given outsourcing partner. How would you comment on the dilemma between the need to move fast, using cloud services to make this someone else's problem, going all in and using the maximum abstraction layer of the cloud services to be most efficient, versus being cloud-agnostic to cater for the exit strategy concerns from the regulators?

Maude: You have to weigh up the risks. If a financial institution were told by AWS, "Get out," or were told, "Your price is going to be doubled or tripled in 30 days' time," then that would be a big problem. The question is, how likely is that problem? I would say not very likely. It's about weighing that sort of black swan, high-impact, low-probability risk, asking how much effort we have to put towards mitigating it, and balancing that against all the other risks.

Quinn: Just to give an alternative perspective: we've recently launched a blockchain network, which is called Corda Network. We work exclusively with businesses rather than individuals, and some of the businesses have been a little bit reluctant to be locked into our blockchain network. It's the same point: they don't want to be reliant on one provider. If a lot of their data is associated with our blockchain network, how would they leave? So we've set up an independent foundation especially for that, because we're such an early business. This is a way to show that the governance is not with R3, which is our company, but with a completely separate legal entity, and that participants can vote for and stand for the board. Just to provide an alternative perspective on the same problem.

Patel: In terms of development and exit strategies, I think in this particular instance the regulator might be seen as a boogeyman, but it just makes good business sense to have an exit strategy. You could be kicked off; AWS could start becoming unreliable, for example; they could be having major issues, or they might not have enough capacity to satisfy your needs. Having an exit strategy makes good business sense in general. From a software engineering perspective, building abstractions so that you're not tied into one vendor is an important strategic decision as well: investing in those frameworks and tooling and libraries and education, and avoiding being locked into one vendor's proprietary database, for example, by picking open source technologies. A lot of cloud providers provide the same technology just rebranded differently: Google has its Cloud SQL product, Amazon has its Aurora and RDS products, and they are functionally equivalent. It's pretty easy to migrate from one to the other, so if you build your abstractions loosely coupled, it can be fairly easy to move, and having a good exit strategy makes sense.
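The loose coupling Suhail describes can be sketched as a small interface with per-vendor adapters; the class names are illustrative stand-ins, not real cloud SDKs. Switching providers then means writing one new adapter rather than rewriting the application:

```python
# Application code depends only on a narrow storage interface; each vendor's
# managed database hides behind an adapter implementing that interface.

from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    @abstractmethod
    def put(self, key, value): ...

    @abstractmethod
    def get(self, key): ...

class VendorAStore(KeyValueStore):
    """Stand-in for, say, one provider's managed database."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class VendorBStore(KeyValueStore):
    """Stand-in for another provider's functionally equivalent product."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def record_payment(store: KeyValueStore, payment_id, amount):
    # Application logic only ever sees the interface, never the vendor.
    store.put(payment_id, amount)
    return store.get(payment_id)
```

The exit strategy is then a matter of implementing `KeyValueStore` for the new provider and migrating the data, not touching `record_payment` and its callers.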

Participant 5: Hello, I'm ex-AWS. I would think they would probably argue quite strongly that those products are not functionally equivalent, but that's just an aside. I was going to ask how you think about infrastructure as code and how it comes under your consideration in terms of regulation. I know it's a slightly forward-thinking concept, but infrastructure as code is going to become more and more important in the future, and it's going to essentially take over from code in a lot of areas, so we're going to code less and do infrastructure more, in a sense.

Patel: I don't think I agree with the premise that you're going to code less and have more infrastructure as code. The way I see it is, if you look at how things were done before a lot of these tools which allow you to specify infrastructure as code came out, things like Terraform and CloudFormation, a lot of people were essentially just spinning up EC2 boxes and AWS products by hand. What you ended up with was an estate, a jungle, where you had zero control over who was talking to what, where, when, and how, and making it reproducible was an absolute nightmare.

What infrastructure as code has allowed people to do is essentially modularize and templatize your whole infrastructure, your whole estate. You can say, "Okay, I want another copy of this in another AWS region," and have an exact copy spun up, barring a few modifications. Another thing that infrastructure as code allows you to do is version control, which is very important: being able to show incremental improvements, to see who made what decision, where and when, and how things have changed. That's pretty important from a regulatory standpoint, being able to show how things have changed over time, and being able to show, line by line, exactly how your estate is laid out is pretty important as well.

Infrastructure as code has been a far-reaching benefit compared to spinning things up by hand, but I don't think it's a full replacement for code. There is still going to be software developed, stuff that infrastructure doesn't fully solve. Maybe different kinds of code are going to be written.

Calin: I also agree that it's not going to be a full replacement for code, but from a regulatory perspective, infrastructure as code helps with compliance a lot, because one of the audit questions you get is, "When was this changed? Show me how it was done." There are two big steps: first, you show the configuration, the Terraform configuration; second, you look at the Terraform state and say, "Well, it was done at this time, it was done by this person, and it's at this version." From that point of view, it makes regulation easier.
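The audit trail the panelists describe falls out of version control for free. The following self-contained sketch builds a toy repository standing in for an infrastructure-as-code repo (the rule file is Terraform-style but purely illustrative, and the author identity is hypothetical) and then answers the auditor's "who changed this, and when?" with a single `git log`:

```shell
# A toy repo standing in for an infrastructure-as-code repository.
rm -rf /tmp/iac-demo && mkdir -p /tmp/iac-demo && cd /tmp/iac-demo
git init -q
git config user.email "audit@example.com"   # hypothetical identity
git config user.name "Audit Demo"

# A firewall rule expressed as code (Terraform-style, purely illustrative).
echo 'resource "firewall_rule" "api" { port = 443 }' > rules.tf
git add rules.tf
git commit -q -m "Restrict API ingress to 443"

# The audit trail: who changed which rule, and with what intent.
git log --pretty='%an: %s' -- rules.tf
```

Every subsequent change to `rules.tf` would add another attributed, timestamped entry, which is exactly the evidence an auditor asks for.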

Moderator: So always answer your auditor by starting off with the features you get out of Git, and then see what remains.

Participant 6: Going back to the stuff you talked about, being able to roll back your releases. Obviously, you can't roll back a release if it deleted a bunch of data, so you're going to lean heavily on immutable data. What technologies and tooling have you found useful in making sure you have an immutable set of data that you can fall back on?

Jones: I actually have the opposite problem. We have an event log, which is immutable for the most part, which means that if you deploy a new release, it's not going to delete anything, but it might write some new data that an old release doesn't understand. That problem, I think, is really hard to deal with, because rolling back just never becomes an option: sure, you've still got your old container up and running, but if you flip your DNS back to it, then it's going to crash because there's a load of data it doesn't understand. Back to one of my earlier points, the only solution we've found effective is increasing, wherever possible, the velocity with which you can fail forward. How do you mitigate and move on from that? The problem is broadly solvable, but it probably requires a wider cultural change.

One thing we've not yet really baked in is only deploying changes that are perfectly forward and backward compatible. Personally, I haven't spent enough time weighing up the tradeoffs of that. On the one hand, you gain immense flexibility; you can automate pretty much all forms of canary rollouts, for instance. On the other hand, maybe it costs you development time and mental engagement, because you have a lot of optional fields and in every case you have to account for whether or not a piece of data is there. It's not a shift we've made yet, but in general, clumsily falling forward as fast as possible is what we've had better luck with when you do have immutable data.

Calin: I think we found quite a good solution to this. We have our services that perform all of the actions that our API has to do, and then we have our databases. Our services are deployed as Kubernetes Deployments, which are stateless, and all of the databases, which maintain a quorum, are deployed as StatefulSets. What that means is, you have a volume attached to the pod, and if that pod dies, or you kill it, the data stays there and waits for a new pod with the same hostname to come up, so the data doesn't move.
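The StatefulSet behavior Calin describes comes from two Kubernetes features: ordinal, stable pod names (`db-0`, `db-1`, ...) and per-pod `volumeClaimTemplates`, so a replacement pod with the same name reattaches to the same volume. A minimal sketch of that manifest shape, written here as a Python dict (names, image, and sizes are illustrative, not Paybase's actual configuration):

```python
# Minimal StatefulSet shape: stable hostnames + one volume per pod.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",   # headless service provides stable hostnames
        "replicas": 3,         # a quorum of three database pods
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {"containers": [{"name": "db", "image": "postgres:11"}]},
        },
        "volumeClaimTemplates": [{   # one persistent volume claim per pod
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "10Gi"}},
            },
        }],
    },
}

# Pod names are ordinal and stable, so a restarted db-1 finds db-1's data.
pod_names = [f"db-{i}" for i in range(statefulset["spec"]["replicas"])]
print(pod_names)
```

Contrast with a plain Deployment, where pods get random names and no claim of their own, which is why the stateless services live there and the databases do not.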

The other thing that helps, in the event that you do corrupt your data, is taking backups. Take backups as often as you can; we do them every half an hour, so that if you do end up losing any data, you lose the minimum amount you can afford. That's a good way of doing it, and that's how we've been doing it. If one of the new changes to any of our services is not good enough and it fails, then it will never become healthy in the first place, so it shouldn't touch any of the data.
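The half-hourly cadence bounds the worst case: restoring means losing at most whatever was written since the last backup. A small sketch of that arithmetic, picking which backup a restore would land on (times here are made up for illustration):

```python
from datetime import datetime, timedelta

def latest_backup_before(failure: datetime, first_backup: datetime,
                         interval: timedelta) -> datetime:
    """Most recent backup taken at or before the failure time."""
    completed = (failure - first_backup) // interval   # whole intervals elapsed
    return first_backup + completed * interval

first = datetime(2019, 5, 31, 0, 0)
failure = datetime(2019, 5, 31, 10, 17)
restore_point = latest_backup_before(failure, first, timedelta(minutes=30))

print(restore_point)            # the 10:00 backup is the restore point
print(failure - restore_point)  # the data-loss window, always < the interval
```

This is why "as often as you can afford" is the rule: halving the interval halves the worst-case loss window, at the cost of more backup load.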

Participant 7: We have now discussed regulation mostly with respect to cloudification and deploying to the cloud, but look at other advancements in FinTech beyond deployment concerns: things like using blockchain, possibly paying with cryptocurrencies, or all kinds of new payment providers or third parties that collect your data and may not be fully authorized the way banks are. Do you think that for these, more regulation is needed, or less than today? Do you think, for example, that today's regulation is sufficient, or completely insufficient and there should be much more, done in a different way? What do you think?

Quinn: I can speak from our company's perspective. We don't have a cryptocurrency in our company at all; it's just the underlying blockchain or distributed ledger technology. There's no regulation specifically for it in the U.K.; we engage with the FCA and they're issuing guidance, but we're not regulated in the same way that anyone else on this panel is. A lot of the regulation is still emerging; that's just some context setting.

It's almost a philosophical question: do you think it's effective to have more regulation or less? Do you have to have the right incentives rather than regulation? What promotes the right behavior, basically? I think more guidance is certainly needed in the cryptocurrency space, where a lot of people may have lost money and there's been a bit of a Wild West atmosphere, at least in the last couple of years. In our company, all of our clients are really heavily regulated, so no one can engage on our network without dealing with the financial regulation that exists in many different countries around the world. Although we're not heavily regulated, anyone who uses our software is, so there's kind of a second degree of regulation there. In summary, I think more guidance on regulation is needed around cryptocurrencies, for sure.

Maude: Moving on from cryptocurrencies, where I completely agree, I think that the regulatory environment does need to work harder in some ways to catch up with all the new technologies, the new way of banking, the new way of doing financial services. To give an example, Starling Bank, along with all the other regulated banks, recently had to publish data that allowed people to go online and compare us, Starling Bank, with other banks to see who was better. There were some fairly simple questions there, like: how long does it take to open a bank account? How long does it take to get a debit card? How long does it take to replace your debit card if you lose it?

With Starling Bank, if you lose your debit card, you can go into the app, cancel the card, order a replacement, get a virtual replacement instantly, and then provision it to your mobile wallet. So how long does it take to get the card? No time at all. But they want to know about the physical card, because they are thinking, "No, a debit card is a physical piece of plastic that you hold," so you have to take into account postage times.

The regulatory book for defining all the terms necessary to understand this, by the way, was this thick, and this is designed to produce data that an average customer off the street can just go onto a website and download very easily. There's going to have to be a lot of paradigm-shift thinking in the minds of regulators, as we heard earlier with the form about the data center, in order to apply this regulation, so there's definitely work needed there to keep up.

Participant 8: I think we're all fairly familiar with PCI DSS and encryption and masking of PAN data and credit card data, but when it comes to GDPR PII data, there seems to be varying opinion on the interpretation of the GDPR regulations around whether that data should be encrypted in transit and at rest or not.

Calin: Well, there's one argument to be made, which is that just because you've encrypted data doesn't mean you're doing everything else in a secure manner. Under GDPR you're not obliged to encrypt data, exactly because of this; but under all of the other regulation you should encrypt the data, and if you want any security you should do so. In my opinion, GDPR is very vague, and it's written in such a way that almost everyone can turn it around and say, "Well, I think I can do it this way." Not everyone will agree with me; that's just my opinion.

Maude: There's also the question that when you actually sit down with GDPR and try to decide what is personal data, it becomes harder than you'd think. I could give out a postcode right now; technically, that's personal data, but if I just read out a random postcode, you've no idea whose personal data it is or who it relates to. Me just reading out a random postcode does not violate any sort of data regulation. If I start reading out that postcode along with someone's name and date of birth and so on, then in combination that becomes very much personal data that you do not want to leak. Oftentimes, it's not specifically about the data fields, but about the connections between those fields and whether you're revealing the whole picture, as it were, of someone's personal profile.
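Maude's point, that it's the combination of fields rather than any single field that makes data identifying, is sometimes modeled with quasi-identifiers. A deliberately crude sketch: the field names and the threshold below are invented for illustration, and real GDPR analysis is far more nuanced than counting fields.

```python
# Fields that reveal little alone but identify a person in combination.
QUASI_IDENTIFIERS = {"postcode", "name", "date_of_birth"}

def is_identifying(record: dict) -> bool:
    """Flag a record that combines two or more quasi-identifiers."""
    present = QUASI_IDENTIFIERS & {k for k, v in record.items() if v}
    return len(present) >= 2   # one field alone: 'no idea who it relates to'

print(is_identifying({"postcode": "EC2A 1AA"}))                      # a lone postcode
print(is_identifying({"postcode": "EC2A 1AA", "name": "J. Smith"}))  # postcode + name
```

The useful takeaway is the shape of the check, that it runs over combinations of fields rather than individual ones, which is why column-by-column masking rules can miss what a linkage across columns reveals.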

 


Recorded at: May 31, 2019
