


Exploiting Common iOS Apps’ Vulnerabilities


Summary

Ivan Rodriguez walks through some of the most common vulnerabilities in iOS apps and shows how to exploit them. All these vulnerabilities have been found in real production apps of companies that have (or don't have) a bug bounty program. This talk is useful for those involved in mobile app development and for those who use mobile apps to work with sensitive data.

Bio

Ivan Rodriguez is a Software Engineer at Google by day and a security researcher at night. He has found many vulnerabilities in different mobile applications and reported them through the popular bug bounty platforms HackerOne and Bugcrowd. He worked for many years as a mobile developer before changing careers and focusing on application security.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Rodriguez: Almost a year ago, on November 14th of last year, I submitted a vulnerability to a company that had a crypto wallet. They had embedded some user credentials in their mobile application. With these, an attacker would be able to log in to third-party servers and query their database; anyone who downloaded this application could just reuse these credentials and log in as the company. I don't know if you can see it, but they awarded about $2,000 for this, and they invited me to their private program, where they would pass me their applications before they hit the App Store so that I could check them beforehand.

Since then, I started doing this on a daily basis, checking different applications. There are a bunch of companies that provide a public bug bounty, where they tell you, "Ok, here's my mobile application, please check it, and if you find something, we're going to pay you for it." During this year, I found that many apps are making the same mistakes over and over. I found the same vulnerabilities in very different applications, and we're going to talk about four of them. I wish we could talk about more, but that would take a long time. That's how this journey started. Quick disclaimer: everything I talk about here is my opinion and my views, and they do not reflect those of my employer. Standard stuff.

Who am I? My name is Ivan Rodriguez, I'm a software engineer and security researcher. I specialize in mobile applications, specifically iOS, because I was an iOS developer for a long time. Then I moved into security because it's not crowded, there are not many people doing this, and that's also where the money was for me. I do a lot of research around mobile applications, like I said, on iOS, and I publish most of my research on my blog, ivrodriguez.com, so you can check it out. There are different topics, different pieces of research I've done in the past, so you can check the blog posts there.

I also tweet a lot about this, so you can find me on Twitter @ivRodriguezCA. I retweet a lot of accounts that have anything to do with security on mobile applications, for both iOS and Android. Finally, during my reverse engineering I have found that there are some steps you can automate, so when I feel comfortable sharing some very bad code, I publish it on GitHub. You can check my repos over there, and sometimes you'll find scripts that help you automate some of the steps of reverse engineering iOS apps.

Today we're going to talk a little bit about reverse engineering an iOS app at a very high level - because that could be a talk on its own - so, at a very high level: how do we decrypt iOS apps? How do we transfer them to our computer? We're going to see some tools for that. Then we're going to see four different vulnerabilities that are very common in many applications, along with prevention techniques and how to fix them if you already have them in your application. Finally, some resources about this, if you're interested in learning more and experiencing this yourself, and hopefully we get to some questions. Let's start with reverse engineering.

Reverse Engineering an iOS App

Not many people know this, but when you download an app from the App Store, that app is encrypted, so it's not in plaintext on your phone. The first step is to decrypt that iOS app. Apple uses an algorithm called FairPlay, which was introduced with iTunes. Back in the day when iTunes was introduced, you were able to download songs instead of buying CDs, and Apple needed a way to protect these songs so that you would not download and redistribute them. They came up with this algorithm called FairPlay, and they use something similar, kind of tweaked, for iOS apps. We need a jailbroken phone to do this, because most of the tools we're going to use are not allowed in the App Store.

Also, people might not know this, but every single process that runs on your iOS device has to be signed by Apple, or by a developer or enterprise certificate whose chain goes all the way back to Apple. The tools we use to decrypt iOS apps are not signed by Apple, so that's another reason you need a jailbroken phone. Third, we don't really decrypt the iOS apps ourselves; we load the app into memory, dump a portion of that memory, and rewrite the binary. We're going to ask iOS, "Look, iOS, you know how to decrypt these apps, just load this one into memory and we're going to grab it from there."

Lastly, once we have this, we rebuild the iOS app and transfer it to our computer, where we can do further analysis. How do we do it? We could use a memory-dumping tool that takes a file name, which in this case is the app binary, and an offset of the memory that we want to dump. The reason is that the binaries have a header that indicates which portion of the binary is encrypted; it's not the entire binary. With this information, we can ask iOS to load it into memory and then dump the portion of the binary that is encrypted.

Luckily, we don't have to do this by hand; we don't have to figure out the headers and all of that, because there are tools we can use. These are three of the most common ones, but there are many more now: dumpdecrypted, bfinject, and frida-ios-dump. All of them are open source and you can check them out; they let you target an application, launch it for you, decrypt it, and leave you with a decrypted version of the iOS app. For the non-iOS developers in the room, iOS applications have a .ipa extension, but in reality they're just zip files, so we could literally rename that file to .zip and extract all the files that ship with the application.
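To make the extraction step concrete, here is a minimal Swift sketch (assuming a macOS host with the system unzip; the paths are hypothetical) that treats a decrypted .ipa as a zip archive and lists the bundle contents for static analysis:

```swift
import Foundation

// Hypothetical paths: a decrypted .ipa is just a zip archive.
let ipaPath = "/tmp/DecryptedApp.ipa"
let outputDir = "/tmp/DecryptedApp-extracted"

do {
    // Shell out to the system unzip to extract the archive.
    let unzip = Process()
    unzip.executableURL = URL(fileURLWithPath: "/usr/bin/unzip")
    unzip.arguments = ["-o", ipaPath, "-d", outputDir]
    try unzip.run()
    unzip.waitUntilExit()

    // The app bundle lives under Payload/<AppName>.app; every bundled file
    // (plists, JSON configs, fonts, frameworks) is now readable in plaintext.
    let payloadDir = "\(outputDir)/Payload"
    let contents = try FileManager.default.contentsOfDirectory(atPath: payloadDir)
    print(contents)
} catch {
    print("extraction failed: \(error)")
}
```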

This is going to be very important, because sometimes developers don't realize that every file they include in their app is easily accessible to a potential attacker, and we're going to see that. Once you have the decrypted version of the app, you can perform two types of analysis: dynamic analysis and static analysis. Most of the people I know love dynamic analysis; it's very fun, you're sniffing the traffic of your application. Most applications need a server to connect to, to read or write data. This is the fun part: you see the packets going through, you change some values and see how the server reacts, you change some values again and see how the app reacts. It's a very fun part of the analysis.

Also, many companies are interested in games, for example. They have open bug bounty programs where they say, "Look, if you find a vulnerability - a very common currency in games is coins; let's say an attacker can generate thousands or millions of coins and get an unfair advantage - we're interested to see any of those kinds of vulnerabilities." This would be part of the dynamic analysis, where you are literally playing the game to see how it works, to see if you can circumvent some of the restrictions and figure these things out.

There's also the other side, static analysis, and many people don't like this one because they say it's very boring. I sometimes agree, but I personally love this part. Actually, all the vulnerabilities we're going to see today were found using static analysis; I did not even run the apps to find them. Here, you have to sit down and read the decrypted version of the app. You look at the machine code and figure out, "This method jumps here, this method jumps there, the application does X when I tap this button," things like that. You also read through maybe thousands of files that the developer included with the binary: fonts, images, configuration files. This is very important, because sometimes developers don't realize that if they add a configuration file with a hard-coded credential, it's going to end up in the bundle, and an attacker can just extract it. That's exactly what happened in the first vulnerability we're going to see today.

Vulnerability #1

When you unzip the application, you end up with all these files. You have folders like the Frameworks folder, which hosts all the third-party frameworks the application uses. It has all the images, all the custom fonts, and especially the configuration files: the .json, .plist, and .xml files are going to be there in plaintext. This is a very common mistake that a lot of people make: they embed a configuration file, sometimes with things like a private key. Yes, it's intended to be private; as the name suggests, it should be private. How do we end up in this situation?

What's happening is a very common pattern: a company doesn't have the resources to build its own back-end system, for example, so they hire the services of a third party; let's call it a cloud database. What this provider does is say, "I'm going to give you a certificate with a private key so that you can access your resources in our cloud," and that certificate probably includes something like an SSH private key so that you can establish an SSH connection to the server.

The problem is that many developers embed this certificate directly in their app and make the API requests directly from the app to the third party. If you look at it from the user's perspective, this is no different from connecting to your own server; the app is just connecting somewhere. From the attacker's perspective, it means your certificate is living in the same client that I'm able to read in plaintext. Don't do this. Anyone could have access to these credentials. This is exactly what the company in the first vulnerability, the one I showed you the screenshot for, was doing.
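To make the anti-pattern concrete, here is a minimal Swift sketch (the file name and keys are hypothetical) of an app reading third-party credentials out of its own bundle. Whatever this code can read, an attacker who unzips the .ipa can read too:

```swift
import Foundation

// Anti-pattern: anything read from Bundle.main ships in plaintext
// inside the .ipa, so the attacker gets the same credentials the app does.
struct ThirdPartyCredentials {
    let apiKey: String
    let privateKey: String
}

func loadBundledCredentials() -> ThirdPartyCredentials? {
    guard let url = Bundle.main.url(forResource: "CloudConfig", withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let anyPlist = try? PropertyListSerialization.propertyList(from: data, options: [], format: nil),
          let plist = anyPlist as? [String: String],
          let apiKey = plist["apiKey"],
          let privateKey = plist["privateKey"]
    else { return nil }
    // These values are exactly what `unzip` plus a text editor reveals.
    return ThirdPartyCredentials(apiKey: apiKey, privateKey: privateKey)
}
```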

How do we fix this? What's the proper way to handle this pattern, this architecture? We still generate the certificate on the third party's side and download it, but we build a middleware: our own infrastructure, our own server, where we actually store the certificate. Then this server exposes a public API that our application connects to.

This has many benefits. You're going to be able to handle the authentication part on your own: the user gives you a username and password, and you return a token or a session ID, something that tells you they are authenticated to use your API. That's number one. Second, you're not exposing files like the private key that your third-party cloud service requires to access its services. You do that directly from your server, which doesn't have a front end, should not be exposed to the internet, and is the one that hosts this private certificate. This is how you should build the architecture of the system, instead of embedding private information directly into your mobile app.
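On the client side, the app then only ever talks to your own API with a short-lived token. A minimal sketch, assuming a hypothetical api.example.com middleware that keeps the third-party certificate server-side (requires iOS 15+ for async URLSession):

```swift
import Foundation

// The app authenticates against *your* server and receives a session
// token; the third-party private key never leaves your backend.
// URL and JSON shape are hypothetical.
func login(username: String, password: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/login")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["username": username, "password": password])

    let (data, _) = try await URLSession.shared.data(for: request)
    struct LoginResponse: Decodable { let token: String }
    return try JSONDecoder().decode(LoginResponse.self, from: data).token
}
```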

Vulnerability #2

Vulnerability number two. For the non-iOS developers in the room, and for those who jumped directly into Swift, the code you'll see is Objective-C. To showcase the vulnerabilities, I built my own vulnerable app, so my own application is vulnerable to all of the vulnerabilities we're going to talk about. First, we need to know that there's an API in iOS that allows an app to be launched from a different process, for example, a different application. Let's say we have application X, that application has some sort of feed, and in the feed you find a link to a tweet, for example. When the user taps on that link, a pop-up shows up and says, "Do you want to open this in the Twitter app?" Most users are going to say yes, because the experience is going to be way better if you open that link in the Twitter app; the Twitter app was literally built to show tweets.

This is a very common pattern, and it's called URL schemes. As an application, you register the URLs that you're supposed to handle, that you're allowed to handle, that you know how to handle. The convention is that it's usually the name of the application, then ://, and then you can send arbitrary parameters to the application you want to launch. In this case, and this is a real-world scenario, the application was listening to a URL scheme; it would look for the keyword "news" and then a forward slash, and everything that came after that was treated as valid HTML content.

Not only that: after treating it as valid HTML content, it would load it into a WebView. For those who don't know, a WebView is essentially a fully functional web browser within your native application. This app would search for the keyword "news" and then treat the entire remaining parameters as valid HTML, which it loaded in a WebView inside the vulnerable application; it did not send it to Safari, it loaded it directly in the application. What could go wrong with this?
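The talk's demo app is Objective-C, but the same mistake in Swift looks roughly like this (a hypothetical reconstruction; the coinza://news scheme comes from the demo app described below):

```swift
import UIKit
import WebKit

// Vulnerable pattern: everything after "coinza://news/" is
// attacker-controlled, yet it is rendered as HTML inside the app.
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ app: UIApplication, open url: URL,
                     options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
        guard url.scheme == "coinza", url.host == "news" else { return false }

        let payload = String(url.path.dropFirst())           // strip the leading "/"
        let html = payload.removingPercentEncoding ?? payload

        let webView = WKWebView(frame: UIScreen.main.bounds)
        webView.loadHTMLString(html, baseURL: nil)           // arbitrary HTML, no scope
        window?.rootViewController?.view.addSubview(webView)
        return true
    }
}
```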

This is my application; I named it CoinZa: "Coin" because I was building a funny little crypto wallet application, and "Za" because I love pizza. Let's imagine the user is just browsing around, navigating websites, googling something, and they find a link. Safari says, "Do you want to open this in your native app?" Let's imagine this is your bank app; let's imagine you're going to have a rich experience if you open this link in the native app. The problem is that since the HTML is controlled by whoever is opening this URL, they can pass in whatever HTML content they want. In this case, it just loads a Wikipedia article, which is fine to illustrate the point, but someone could craft a login screen for your bank app, for example.

If your bank application is loading arbitrary HTML content within its own UI, people are going to trust that; they're going to inherently say, "Yes, I'm just going to enter my username and password again," because it's your bank application saying, "Sorry, we lost your session, can you please log in again?" Most people are going to do that. The real problem is that this is a fully functional web browser, but there isn't necessarily a navigation bar. Nowadays, many people are aware that there are phishing attacks that try to make you enter your username and password on a funny-looking web page, and they've learned that, "If the URL looks funny, I'm not going to enter anything." Here, that's not possible: the logo is going to be your bank application's, your crypto wallet's, whatever native app you're launching, so people are going to inherently trust it and enter their username and password again.

To illustrate this, this is the way URL schemes work: you establish the name of the application, in this case coinza://news, and then some HTML parameters. In this case it was very simple, we just changed the current web page to a Wikipedia article, but as we said before, it could be something more dangerous. You have to encode the URL so that it's parsable when it's passed to a different application.

How do we fix this? First, never, ever trust arbitrary code that you don't own or that you're not generating. If someone else can send you this information, if someone else can send you parameters, don't just trust them; never load them into something a user can interact with. URL schemes and WebViews are a very dangerous combination, and you should be careful when using them. Make sure that the content you're loading is trusted, that you know it's executing the right things, and never load arbitrary code when you don't know where it's coming from.

Lastly, if you need to react to dynamic actions from a URL scheme passing parameters, have a whitelist of actions that you're allowed to take, and if the content or the parameters you're getting don't match any of those, just say, "Sorry, I'm not going to load this thing." A sketch of that idea follows.
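A minimal sketch of that whitelist approach (the action names are hypothetical): the URL can only select one of a fixed set of known actions, and nothing from the URL is ever rendered as HTML.

```swift
import Foundation

// The URL *selects* a predefined action; its contents are never
// interpreted as code or HTML.
enum DeepLinkAction {
    case showNews
    case showPortfolio
}

func action(for url: URL) -> DeepLinkAction? {
    guard url.scheme == "coinza" else { return nil }
    switch url.host {
    case "news":      return .showNews
    case "portfolio": return .showPortfolio
    default:          return nil   // unknown action: refuse to handle it
    }
}
```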

Vulnerability #3

Number three. This vulnerability is not specific to third-party libraries, but I'm showing it here because one of the most popular third-party networking libraries was vulnerable to it. What's happening? When you build a browser, the browser has to open any website; you don't know which websites the user will want to open.

When you build an app, you probably know which services the app is going to connect to: probably just one server, or at most a handful, so you know the URLs that are expected to be queried from your application. What many people do, and I suggest you do it because it's a very good feature to have, is what's called SSL pinning, certificate pinning, or public key pinning. There's actually a big difference between pinning the certificate and pinning the public key, but the core idea is the same: you know the server your application is supposed to connect to, so you take the same TLS certificate your server has and embed it in your app.

Then when your app tries to connect to the server, it says, "Ok, server, send me your certificate," and you check that information against the information you have embedded; if it's not the same, you don't talk to the server. Plain and simple. When we make a request, we ask for the certificate, and if it's the same one we have, we establish the connection. If someone in between crafts a fake certificate for the site we're trying to connect to, this mechanism should stop that request and not allow any certificate other than the one we have.

The real problem is that, in this case, the third-party library got this wrong. When you build this logic and you don't do it correctly, you open the door to someone handing you a fake certificate and you accepting it. I've seen this in at least six different applications: developers start building this feature, say, "We're going to finish this in the next cycle," and leave a to-do in the method that's supposed to handle the certificate checks. What that method then does is return true for everything, accepting any certificate that is sent.
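The broken pattern tends to look something like this in a URLSession delegate (a hypothetical sketch, not the actual library's code): the challenge handler that was supposed to compare certificates simply trusts whatever arrives.

```swift
import Foundation

// Sketch of the "TODO pinning" anti-pattern: the delegate method that
// should validate the server certificate accepts everything instead.
class UnsafeSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        // TODO: implement certificate pinning in the next cycle...
        if let trust = challenge.protectionSpace.serverTrust {
            // Blindly trusting the presented chain: any man-in-the-middle
            // with a self-signed certificate is now accepted.
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.performDefaultHandling, nil)
        }
    }
}
```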

With that, an attacker - the attack is called a man-in-the-middle attack - is able to sniff all the traffic between your application and your server, and it's not just the URL you're connecting to: it's all the data, the username and password, your bank information, your health information. Everything between you and the server you're trying to connect to can be sniffed.

To illustrate this, I'm going to use a tool called BetterCAP. BetterCAP asks me for an IP, which we call the target; I launch it and ask it to sniff for HTTP traffic.

Something important to see here is that this is a remote attack; I don't have physical access to the device, all I have is the device's IP. Let's imagine there's an attacker in a public coffee shop, and somehow they managed to get the IPs of the users' phones. If your application is vulnerable to this specific issue, BetterCAP starts listening to all the packets your phone is sending, and if one of them is from your vulnerable app, your app is going to say, "I need to request this URL, can you send me the TLS certificate for it?" The beauty of the internet is that whoever responds faster, that's the packet that gets processed. Since the attacker is probably closer to you than the server, they respond faster, so your phone gets that packet before the real server's. In this case, it carries a fake certificate for the URL you're trying to connect to, and since the app is vulnerable and accepts any certificate, the attacker is able to sniff the traffic.

It might be a little difficult to read, but I have a zoomed-in version here. What's happening is that my application is mimicking some traffic. I ask for some content from githubusercontent.com, and BetterCAP sees that request and says, "Ok, I'm going to create a fake TLS certificate for this website, for this URL, and serve that to see if your application accepts it." The premise is that this application is vulnerable, so it accepts those certificates, and now BetterCAP is able to see the request that I'm making, which in this case is just a static file, but it could be anything, real data that your application is sending.

How do we fix this vulnerability? Keep your third-party libraries up to date; I cannot stress this enough. Almost every app I've checked has at least one dependency that is outdated. Having been a developer, I know it's a huge headache to update these things, and sometimes they break your app. Most PMs would say, "If it's not broken, don't fix it, just leave it as it is." That's why many of the third-party libraries in mobile applications are very outdated. I'm talking two, three, four, five, even six years out of date, and that can lead to vulnerabilities like this one.

Also, I understand that it's hard to manage, because some applications have 10, 15, 20 different third-party libraries that you have to keep up to date, but make sure to do it. Then also, be careful when implementing this logic, whether SSL pinning, certificate pinning, or public key pinning. It's a very useful security measure to add to your application, but be sure you know how to implement it, because a bad implementation of this logic can introduce exactly this vulnerability, where somebody sends you a fake certificate and you end up accepting it.
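For comparison, here is a minimal sketch of what a real check can look like, pinning the DER bytes of a certificate bundled with the app ("server.der" is a hypothetical resource name; this is simplified, and a full implementation should also evaluate the chain with SecTrustEvaluateWithError):

```swift
import Foundation
import Security

// Sketch of certificate pinning: compare the server's leaf certificate
// against a copy bundled with the app.
class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    private lazy var pinnedCertData: Data? = {
        guard let url = Bundle.main.url(forResource: "server", withExtension: "der")
        else { return nil }
        return try? Data(contentsOf: url)
    }()

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let pinned = pinnedCertData,
              let leaf = SecTrustGetCertificateAtIndex(trust, 0),
              SecCertificateCopyData(leaf) as Data == pinned
        else {
            // Certificate doesn't match the pin: refuse to talk.
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}
```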

If you need to do this, TrustKit is a very good library. It does the proper kind of pinning, public key info pinning, and the authors explain why and how, so you can definitely use it in your projects if you want to implement certificate pinning.
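Configuring TrustKit looks roughly like this (the domain and the base64 hashes are placeholders, and the exact option keys may vary by version, so check the project's README):

```swift
import TrustKit

// Hypothetical TrustKit configuration: pin the SPKI hashes for one domain.
// TrustKit requires a backup pin, hence the two (placeholder) hashes.
let config: [String: Any] = [
    kTSKPinnedDomains: [
        "api.example.com": [
            kTSKPublicKeyHashes: [
                "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
                "BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=",  // backup pin
            ],
            kTSKEnforcePinning: true,
        ]
    ]
]
TrustKit.initSharedInstance(withConfiguration: config)
```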

There's also more information in the OWASP project, the Open Web Application Security Project. It gives you more detail on why this is a good idea, how to do it, what the best way to pin is, and what information to pin, if you want to read more about this.

Vulnerability #4

The fourth vulnerability. Going back to the earlier vulnerability we saw, that application was accepting arbitrary HTML code. In this case, we're going to pair that with a very dangerous behavior of an old iOS class called UIWebView. Even Apple says in the documentation to be careful when using this class, specifically this method, to access local files; we're going to see why a little later. As you can also see from the documentation, this class was deprecated in iOS 12, which was released last year, but I have not found one app that is not still using it; there are many apps still using UIWebView. The documentation suggests using this other method instead, loadHTMLString with a base URL.

The problem, which is not stated in the documentation, is that having a nil or empty base URL is the same as not having one, so these two API calls are equivalent. As a note, don't use either of them. The reason is that when you don't define a base URL for a WebView, any JavaScript running in that WebView has access to any file within the file system of your application's sandbox. All your Documents folders, your tmp folders, your Caches folders are accessible to JavaScript executed in your WebView if you don't define a base URL. The base URL essentially defines a scope for your JavaScript; if you don't define a scope, it's open season on any file within your sandbox.
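Concretely, the dangerous call looks like this (a sketch against the deprecated UIWebView API):

```swift
import UIKit

// Sketch of the vulnerable call: with a nil (or empty) base URL,
// JavaScript in the loaded HTML can read files from the app's
// sandbox (Documents, tmp, Caches).
let legacyWebView = UIWebView(frame: UIScreen.main.bounds)  // deprecated since iOS 12
let untrustedHTML = "<html><body>attacker-controlled markup</body></html>"

legacyWebView.loadHTMLString(untrustedHTML, baseURL: nil)   // unscoped: don't do this
```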

Something like this could happen: someone writes a payload, in this case one that searches for a database. During static analysis, you might find that an application saves a database for, let's say, your messages. It's very common for a crypto wallet to create a local database where it stores all the private keys for your cryptocurrency. That's a very dangerous application to be loading a WebView with no scope, because JavaScript could just search for that database and exfiltrate it: read the file from local disk and send it to a remote server.

This is how that payload looks when you encode it; that's not important. Here, I'm going to show how to do it. Again, the user is just surfing around, they try to open a link, and that opens a WebView within the vulnerable application. I'm not going to show the database being sent to a server, because that's kind of difficult to show, but here we're able to read the database that's in the Documents folder of the application. Even though this is just a simple website, just a WebView that got loaded, it has access to the entire file system within the sandbox.

How do we fix this? Don't use UIWebView anymore; there's no need, there's a new and better class called WKWebView. It supports everything UIWebView supports and it doesn't have the same vulnerable APIs. If for some reason you still need to use the UIWebView class, maybe for old versions or compatibility issues, don't use these methods; don't use loadRequest for local file access. If you're using the baseURL method and you don't know what the base URL should be, you can use this trick where you pass "about:blank," which is a standard way on the web to load a blank page. That limits the scope of your WebView to just whatever is loaded within it, and that's all the resources the JavaScript can access.
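In code, the fix is roughly this (a sketch; WKWebView is the supported replacement):

```swift
import WebKit

let trustedHTML = "<html><body>content you generated yourself</body></html>"

// Preferred fix: WKWebView, which runs web content out of process and
// doesn't grant sandbox-wide file access through a nil base URL.
let safeWebView = WKWebView(frame: .zero)
safeWebView.loadHTMLString(trustedHTML, baseURL: nil)

// If you are stuck with UIWebView for compatibility reasons, scope the
// content explicitly instead of leaving the base URL nil:
// legacyWebView.loadHTMLString(trustedHTML, baseURL: URL(string: "about:blank"))
```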

Conclusions

Some conclusions. Add security assessments to your build cycles. If you have a QA cycle, for example, add security assessments to it; it's a very good idea to have someone, either internally or a third-party pen tester, go through your app or apps and check for some of these vulnerabilities. Keep your third-party libraries up to date; I know I've said this before, but again, I've seen so many instances where applications are vulnerable just because they have an old version of a third-party library. Be careful when you copy and paste code from the internet, from Stack Overflow or somewhere like that, because it might have vulnerable portions. Especially when you're implementing complicated logic like SSL pinning, you need to make sure that code works and is not vulnerable.

Lastly, and this is a very important one: have a public bug bounty. If you don't want to pay researchers for finding vulnerabilities, that's fine, but at least have some way for someone to contact you. I moved from working only with public bug bounties to randomly downloading applications and searching for vulnerabilities, and there are many vulnerable applications with no way to contact the vendor, no way to say, "Look, developers, can you just fix this, please?" I'm not asking for money in return or anything; I just want you to fix this for your users.

I usually just find some random contact@website.com email, or a contact form on their website that goes to, say, sales, and they might not know how to react to someone saying, "I found a vulnerability in your system or in your app," so they might just discard the email. Have some way for anyone to contact you and give you this information, because this is a struggle we have in the community: we find vulnerabilities and don't know how to reach people, and sometimes we get threats instead, "We're going to sue you, we're going to send the police your way," when we're just trying to help the companies. This is very important.

Resources

Some of the resources you can find: the OWASP project was built for the web, but now it has a mobile branch where you can find a lot of resources on how to securely build your iOS or Android applications, how to pen test applications yourself, or just how to start learning about all these things. Finally, I have my own version of this. I call it a course, but it's a bunch of write-ups about very common vulnerabilities; it also takes you from not knowing how to decrypt an iOS app all the way to modifying it. I did this because I found that many people are interested in learning all this but don't know where to start, so it takes you all the way from setting up your tools and creating your environment to modifying your application, or third-party applications, at runtime.

It's free, it's there to use, it's open source. I recently found out that a professor is using this course and some of my write-ups in one of his courses, which is fantastic; this is exactly why I write this kind of stuff, so that more people will learn about it.

Back to the initial story: after they invited me to their private program to see their app before it hit the market, many other companies reached out and said, "Ok, can you please also look at our apps?" There are not many researchers looking at mobile apps, and many companies will offer to pay you double for vulnerabilities you find in their mobile app. If you're looking for something to do on the side, this is a very interesting job.

Questions and Answers

Host: Which vulnerabilities are the most popular?

Rodriguez: The most popular one I've seen is hard-coded credentials.

Host: Hard-coded credentials right in the app. Basically, you can get them just by reverse engineering, just by running the strings command on the app?

Rodriguez: Yes. It could be anything from a simple plaintext file, as I showed here, all the way to an obfuscated string within the binary that, at the end of the day, if I spend enough time, I will be able to reverse engineer, figure out the values, and use those to log in as the company.

Host: Also, from my experience, the cryptographic keys: when you have some encrypted data in your app but you put the key somewhere in a plist near the data.

Participant 1: Thank you, great talk. You talked a little bit about certificate pinning, and you do reverse engineering. Just a reminder for people: certificate pinning helps prevent man-in-the-middle attacks on the TLS connection. As a reverse engineer, are you finding it to be an impediment to figuring out the protocol, or are you able to take out the logic that checks the certificate, or replace it with logic that accepts your own certificate?

Rodriguez: Yes to both questions. I find it an impediment because usually you just want to do the very initial dynamic analysis, where you install your own certificate on your device and then sniff your traffic. The first thing you notice is, "I cannot see the traffic," or, "It's blocking the connections." Then it's, "There's some certificate pinning, I have to figure this out." To the second question, yes, but it's non-trivial, because some apps have very intricate logic around the pinning and you have to break that before you can sniff the traffic. It's a game: the more layers you add to your security, the more it benefits the company, but someone with enough time and resources will eventually get there.

Participant 2: Can you talk a little bit about the differences in the protection offered by Apple's App Transport Security settings and certificate pinning? What are the different attacks that you would be protected from under both?

Rodriguez: App Transport Security is Apple's way of ensuring that your application connects over TLS. They enforce that every connection your application makes is a TLS connection, and there's a well-defined set of accepted cryptographic algorithms you can use for that TLS; for example, you cannot use TLS 1.0 or 1.1. It guides you toward the types of TLS connections you can have. Certificate pinning is an extra step on top of that, where you only accept the one or two or three certificates for your specific domain within a TLS connection, so it's a step beyond what Apple is doing with ATS.

Participant 3: Thanks for the great talk, I think it was very practical. This is a question from an app user's point of view rather than an app developer's. I get really scared when an app loads a web UI, because I can't see the URL, and even if I could see the URL, I can't see the certificate it's being served with. This is a problem across all platforms, not just iOS. I'm curious why the industry has fallen so far behind on this very important aspect, because it's a simple thing: content is being served from some server, it could do something to your device, and there's just no way of knowing where that content is coming from. It seems like a big deal; why are WebViews allowed in apps?

Rodriguez: I don't really have an answer for that, but my experience is that many companies do this because of time. They cannot build something for both iOS and Android very quickly, or they might change their minds down the road, so they build the WebView side of things because they can change it dynamically, on the server side. If you're loading just a WebView, it could look like anything; you can just change that page, and you don't have to resubmit the app to either store. It's about time and simplicity; those are the most common reasons I've seen for people using WebViews.

Participant 4: I do have a question about this process: can we automate it? I know LinkedIn created QARK a couple of years ago, but that's Android-only. For iOS specifically, is something like QARK available as well?

Rodriguez: There are not many services providing this automation. There are tools that will help you, but it's not an automated process in its entirety. For example, I have scripts that will decrypt an app and transfer it to your computer on their own, but the actual analysis is a very hands-on thing. I think it's mainly because of the tight ecosystem that Apple has; not many things will run on the devices.

I know of companies that have this, but their services are not publicly available. I know of at least two companies that regularly download apps from the top 100, or the free apps, or something like that, and run static analysis on all of them. They have farms of devices doing the downloading, because you have to have a device to download an iOS app; all of them are jailbroken, with custom builds and custom scripts running on them, but it's proprietary, so it's not available to the public.

Participant 5: Most apps are using JSON-over-HTTP requests. Did you encounter any apps that used, for example, Protobuf, and does Protobuf make reverse engineering or man-in-the-middle attacks any harder?

Rodriguez: Yes, most of them are using a REST API or even GraphQL, but it's text. Protobuf is more of an encoding of the actual data being sent. It kind of makes it harder, but there are tools like Burp. Burp is a proxy application that you can use to sniff traffic; it's mostly built for the web, but you can also use it for mobile applications, and it has a built-in parser for Protobuf. Even if the traffic is encoded into a binary format, it will show you the plaintext, so it's not that big of a deal.

Participant 6: Just curious, in your experience, have you seen more vulnerabilities in apps that come from a cross-platform solution like React Native or Flutter?

Rodriguez: The first thing to say about that is that it's easier to go through the source code, because at the end of the day it's just JavaScript bundled with the app; it's not compiled down to machine code, it's JavaScript that you can read very well. It's easier for a reverse engineer to check that bundle, to check that application. You see a binary with very little native code that loads a WebView, and then everything goes to the JavaScript. In that sense, the entry barrier is lower for anyone reverse engineering that type of app. Second, some developers don't realize that this plaintext JavaScript is going to end up in their application, so they hard-code a lot of credentials in those types of apps.

Connecting to your first question about the most common vulnerability: hard-coded credentials within JavaScript code are part of that, that's very common. The second is cross-site scripting, like we saw in the last vulnerability, where you're allowed to execute code or access data from a different domain than your WebView's. These types of frameworks allow that because, at the end of the day, they cannot restrict things to just one WebView; they have to open up everything within your application bundle, so you can see some cross-site scripting vulnerabilities introduced because of that.

Host: How long does it usually take you to reverse engineer an application? I know it's an endless journey, but do you have some timing in mind?

Rodriguez: It depends on the functionality. For example, I'm very interested in crypto wallet applications; most of the applications I've looked at in the past two or three months are crypto wallets, and they're kind of simple because most of the logic is within the application. But at the same time, you have to understand the cryptography they're using and all the logic that protects the crypto coins. It could take a week to fully figure out an application. Also, part of my research is around applications that spy on users, for example, and that can take a month or two.

Host: When you say a week or months, it's not a full-time job?

Rodriguez: No, I do this on the side. I have a 9-to-5 job, and then I go back home and do this.


Recorded at:

Apr 03, 2020
