
Rethinking HCI with Neural Interfaces @CTRLlabsCo


Summary

Adam Berenzweig talks about brain-computer interfaces, neuromuscular interfaces, and other biosensing techniques that can eliminate the need for physical controllers. Berenzweig also explores the emerging field of neural interaction design.

Bio

Adam Berenzweig is the Director of R&D at CTRL-Labs, building the world's first practical, non-invasive neural interface. Previously he was the founding CTO of Clarifai, an early pioneer in deep learning and image recognition. From 2003 to 2013 he was a software engineer at Google, where he built the music recommender for Google Play Music, and worked on Photo Search and Google News.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Thanks for coming. I'm pretty excited to talk about CTRL-labs today to this audience. We've been slowly starting to build public awareness about what we're doing, and we're at a point in the company's life where we're transitioning from lab research prototypes into real-world, usable products. So taking this next step, thinking about how to introduce this technology to the developers who can really make use of it and build products around it, is really what we're all about right now.

So, I want to start with my friend Tom. This movie is 15 years old now, I think, but it still, I think, looms large in the public imagination about what the future looks like, and particularly what the future of the human-computer interface looks like. They were actually showing it on the plane on my way over here, so I watched part of it again. I wanted to call it out a bit, to dig into this particular vision of the future of our way of interacting with technology, and point out, first, a few ways in which it really does seem like we're at the cusp of this revolution, whatever we call it: wearable computing, spatial computing, body computing, the idea that the computer just becomes part of our physical environment and our experience, and is no longer a separate device that we interact with. But just as interestingly, at least to me, I want to point out the ways in which this particular vision of the future might be a little bit wrong, ways in which it doesn't really align with how I think some of it will go.

So, for starters, just to tease this a little bit: gloves. We don't need gloves anymore. We can just capture the output of your motor neurons directly. That's what we do at CTRL-labs: you just have your hands as the interface, and you don't need any other hardware. But there's also a design perspective, and this talk is really about the design issues involved in making this transition to a new way of interacting with machines. You know, this idea of waving your hand around, and … I'm making a lot of noise here. You can get really excited about it, but it's a terrible experience for more than a couple of minutes. The story about filming the scene with this big flashy user interface was that they had to stop filming several times because his arms were getting tired, and he's a pretty buff guy. So, what about the rest of us?

This is a lesson that was actually learned a long time ago. The idea of holding something up in front of your face and using it as a fine-grained control device goes back at least to the 60s and 70s, when there were these light pen displays, and it was quickly discovered that this is unsustainable; you just can't work like that. But I'd just like to point out that this idea remains pervasive. It really is still the dominant paradigm when we think of spatial computing, and with most of the VR and XR systems, when you look at their demos, it's people waving their hands around in front of their face.

And there are obviously use cases where that is appropriate, where that's exactly what you want to do: if you want to be big and flashy because it's a performance, or if you want to get exercise, or if you want the physicality of the motion to be part of the experience. But for long-term sustained use, the way that we use computers all day long in our lives, I think this maybe needs to be rethought a little bit.

So, this is how I'd like to structure the talk. Maybe I'll be preaching to the choir a little bit here, but I want to make the case that it seems like we really are on the cusp of another major paradigm shift in human-computer interaction, and talk about what that will look like and how we'll fit into it. Then we'll talk about neural interfaces specifically and introduce the ideas we're working on at CTRL-labs, and then I'll get into the design issues involved with making this transition.

A Brief History of User Interface

So, in the beginning was the command line. That's an essay by Neal Stephenson. Raise your hand if you've heard of it. Oh, you've got to go read it. It's so awesome. And it's worth making the point here that I still live here, on the command line. Things don't really go away, so we always have to keep in mind that as we go through iterations of these paradigm shifts, they kind of layer on top of each other. As Gibson said, the future is here, just not evenly distributed, and the past is still here, also not evenly distributed. So, if we just think about the shift between console, command-line-based computing and GUI-based computing, I want to break that down in terms of what I think are the essential enabling technologies that made that shift possible.

If you think about it, an HCI system has an input and an output. And I think that most of the major shifts in HCI have really happened when some critical basic technologies have been introduced, both input and output, that pair together in a particular way to make a new interaction paradigm. In the case of the Macintosh and the introduction of windowed computing, it's obviously cheaper rasterized displays, and the mouse.

So, when one of these shifts happens, it opens up an enormous new set of possibilities. But it also presents an enormous challenge for users, who now have to learn a whole new way of computing. And Minesweeper - I have it on good authority from Riordan, who was at Microsoft at the time when this stuff was being built - it's part of the operating system, and every version of Windows ships with Solitaire and Minesweeper. I actually don't know if the recent ones do, but I think so. Certainly in the early days they were there specifically to teach people how to use the mouse. Minesweeper in particular was about the difference between left- and right-click; if you remember how to play, you have to right-click to mark where you think a bomb is. And Solitaire is all about clicking and dragging. So, the idea of gamifying the basic interaction mechanics in order for people to get used to them, and essentially training people how to use a new interaction paradigm, is something that we're thinking a lot about at CTRL-labs, and probably everyone else who's involved with this kind of work is too.

So, okay, let's get into the next one. In talking about the roots, let's pay homage to the early pioneers. We've got that … [inaudible] in there somewhere, Jaron. This progressed for a long time in the big, clunky phase, where lots of amazing prototypes and experimental stuff was built, and it didn't really go very much farther than that until recently, probably because things were too big and expensive. There were some amazing experiences built back then, but it was really cost and size and weight and practicality more than anything else. Then these things started getting smaller and smaller and more fashiony, and then becoming sort of real, finding some real markets. And then perhaps we've really arrived once you get shacked with this device; you're there, or at least you're at the cusp of it. So I think it's pretty clear to observers of technology, and certainly the big tech companies are investing super heavily in this transition, because it's well recognized that this is now almost here. What's that going to look like? There's some amount of excitement from many quarters.

So, this is a slide about the input side of this transition. If the display can sort of disappear onto your head, how do you do input? You're no longer holding a physical device in your hand. The size and shape and weight and cost of this thing are completely dominated by the screen, and if the screen goes away or moves somewhere else, then that's great; this thing can get smaller and cheaper and can disappear into my clothing. But what do I touch to do input? How do I actually interact with software? There have been decades of research on this, and basically it breaks down into motion-capture technologies of one kind or another: sensor gloves, different kinds of camera-based techniques, this device over here … Is this going to work? So on the bottom right is the Knuckles, which is a cool VR controller that Valve is going to release really soon. It basically has capacitive sensors in the grip and a nice design so that you don't have to physically hold it in your hand, because it sort of straps around your hand like this. And so then they can do some basic finger sensing in addition to having classical buttons.

And Leap Motion has done some really amazing interaction prototypes with this kind of thing, sort of pointing the way to what this could look like. But I want to point out that, again, we're still a bit in this paradigm of waving your hand around in front of your face. And spatial computing, at least on the input side, is mostly conceived of as literally reaching out and grabbing objects in a virtual space that's mapped onto the real space around you. Same thing. We're back here.

So, I just want to propose and highlight an alternative vision of what input could look like with these embodied, embedded computing systems. This is an excerpt from the book "Rainbows End." By show of hands, has anyone read this? Okay, yeah, good, two at least. It's a fun book. Vernor Vinge is awesome. I read that he was hanging out with a lot of the early MIT wearable computing pioneers when he wrote this, and it really absorbed, I think, some fantastic ideas. The plot is that this guy had Alzheimer's - it's set in the future at some point - and they've invented a technique to basically cure Alzheimer's by putting the brain back into neuroplasticity. So he basically becomes cognitively young again as the Alzheimer's is cured, but it's sort of a time-travel story, because he was an old guy who grew up in our era, and now he's fast-forwarded into the future, where he's like a kid again, at least mentally, and he has to learn how to use the computers of the future.

And so, this is a scene where he's talking to a kid who's trying to teach him how to use his computer, which is just sort of invisible, sensors and stuff all around his body. And the kid says, "I'm a kid. I grew up with ensemble coding." That's Vinge's coinage for what this interaction mode is like. And, "Hey, even my mom mostly uses phantom typing." "Well, you know, you and I are retreads, Juan. We have learning plasticity and all that. Teach us the command gestures or eye blinks, whatever." "Okay, but this is not like the standard gestures you've already learned. For the good stuff, everything is custom between you and your wearable. The skin sensors pick up muscle twinges other people can't even see. You teach your Epiphany and it teaches you." So, that's what we're building at CTRL-labs.

Introduction to Neural Interfaces

So yeah, let's talk about neural interfaces and how I think the technology can lead to different kinds of interaction experiences, and present a whole new class of interaction problems that fit nicely into this HCI paradigm shift, but in a different way than traditional spatial computing. Broadly, the problem that we're trying to solve is the huge gap between the bandwidth of human input and output. And we talk about this maybe in reverse from the usual HCI framing: when I say input, I mean input to the brain, not input from the person to the computer. So, input: our visual and auditory systems are very high bandwidth, and the technologies that we have today to put information into the brain are very, very good and very highly evolved; you can put tens and hundreds of megabytes per second into the brain. And the output is a trickle. Anyone who's used a modern VR system feels this immediately: you've got this incredibly beautiful, immersive experience, you're looking around, and then you've got these sticks where your hands should be.

So, this is just underlining that point: the most accomplished human input technology we have is basically still the keyboard. And that's pretty good. If we had keyboard-rate bandwidth in VR, that would actually be an improvement over where we are with current controllers, but the hands are so much more powerful and expressive than any mechanical device that we currently use with technology.

So, what else can we do? Well, we can try to get information directly out of the brain, and that's what brain-machine interfaces, or neural interfaces, are all about. This is a scientist who's now with us at CTRL-labs, his name is Ali Gersho. He had done a lot of research on EEG before - electroencephalography - which is basically trying to detect neural activity in the motor cortex, in the brain, through the skull. So yeah, you can try to do it up at the brain. It's extremely hard for a number of reasons. Either you have to implant electrodes into the brain, which is probably not going to be a mass consumer situation anytime soon, or you're outside the skull, and the analogy that we use is that that's the equivalent of holding a microphone outside of Giants Stadium - sorry, whatever. Yeah, sure, I can say Giants Stadium in here - and trying to hear a conversation between two people in row F of section C. There's just so much going on, and you have such a weak signal. But if you do it out here, at the motor periphery, it's a whole different story. There are a number of reasons - I'll get into the neurophysiology in a second.

But suffice it to say, it's a much easier problem out here. The basic technology that we use, as far as the sensors go, is a pretty old one, going back to the 19th century, when scientists first started sticking needles into animals and discovered, "Oh my God, it's actually electricity. Nerves are electricity." And so, electromyography: it's electrodes that you place on the skin. As far as using this as an interface device, Thalmic Labs is a company that had a product, originally Kickstarted, on the market five or six years ago, and you can do some interesting stuff with it. We've taken that way beyond the capabilities of that device in terms of bandwidth and sensitivity, to the point where you can see the individual activity of single motor units, and that's a game changer in terms of what you can do with it.

So, as far as the physiology goes, there are just a couple of interesting things to point out which help explain why it's so much more practical to do this at the motor neurons in the periphery, rather than trying to do it at the head. The first, and most important, is that there are just far fewer neurons. There are billions of neurons in the head, and down here there are on the order of 10,000, and that's because you're getting the output port of the brain. The motor cortex is the organ of the brain that has specifically evolved for the purpose of controlling muscles. If you think about what the brain is for from an evolutionary perspective, the only useful thing it does is control muscles to make the animal it's attached to do things in the world and survive.

And so, the motor cortex is where all the background processes of the brain and subconscious intention and all that stuff get funneled down, and the output says, "Hey, I really want to do something in the world." And if you think about it, control, when we talk about it from an HCI perspective, is the process of taking intent in the mind, a desire to take an action, and turning that into action in the world. That's what control is. So, if you're interested in control, then getting the output of the motor cortex is all you need. And in fact, letting the brain do all the hard work of filtering all the messy, noisy stuff, and just detecting the result down here, is by far the way to go.

The second very important thing for us is that the way the nerves are wired to muscle gives us an extreme advantage electrically: the muscle acts as an amplifier. The strength of the electrical signal that travels down the nerve is on the order of microvolts. A single nerve comes in and innervates a bunch of muscle fibers, and each of those muscle fibers only gets input from one nerve; that whole thing, the single nerve and all the fibers it's wired to, is called a motor unit, and all those fibers together act as an electrical amplifier. And it's a very faithful amplifier. The exact pulse train of activity coming out of the brain travels very faithfully down from the motor cortex to the spinal cord, and then the lower motor neuron carries it from the spinal cord out to the muscle. When it hits the muscle, we see an exact replica of that pulse train, just amplified up several orders of magnitude, to a level that you can detect at the skin.
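
As a hedged aside (this is the standard model from the surface-EMG literature, not something spelled out in the talk), the signal at the skin is usually written as a sum of motor unit action potential trains: each motor unit contributes its firing times convolved with the waveform it produces at a given electrode, which is exactly the "faithful amplifier of a pulse train" picture described above.

```latex
% Common surface-EMG generative model (an assumption drawn from the literature,
% not stated in the talk). s_i(t): signal at electrode i; N: active motor units;
% h_{ij}: action-potential waveform of motor unit j as seen at electrode i;
% t_{jk}: the k-th firing time of unit j; n_i(t): noise.
s_i(t) = \sum_{j=1}^{N} \sum_{k} h_{ij}\!\left(t - t_{jk}\right) + n_i(t)
```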

We're going to switch this. So the point is that once you can capture that signal here at the arm, then you have the ability to decode it, and that's where the machine learning hand-wave comes in. With signal processing and machine learning, we figure out the signal that's traveling down, parsing it out from the multiple signals picked up by an array around the arm, and then you can turn it into control, and you can control all the devices in your life. So the vision for this as a consumer device is that there's one device that you wear all day long, and it's what you use with your laptop and your phone or whatever mobile thing you have, and to turn the lights on in your house and control the radio in your car and your other devices; you just have one interface.
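
To make that "signal processing and machine learning" hand-wave slightly more concrete, here is a minimal sketch of the kind of pipeline being described: band-pass filter and rectify each channel of a multi-electrode EMG array, extract short-window features, and feed them to a decoder. The sampling rate, channel count, filter settings, and the toy linear decoder are all illustrative assumptions, not CTRL-labs' actual implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2000          # assumed EMG sampling rate (Hz)
N_CHANNELS = 16    # assumed number of electrodes in the band around the forearm

def preprocess(emg):
    """Band-pass each channel (20-450 Hz), then rectify. emg: (n_samples, n_channels)."""
    sos = butter(4, [20, 450], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, emg, axis=0)
    return np.abs(filtered)

def window_features(emg, win_s=0.1):
    """Mean absolute value per channel over non-overlapping windows."""
    step = int(win_s * FS)
    n_windows = emg.shape[0] // step
    trimmed = emg[: n_windows * step].reshape(n_windows, step, emg.shape[1])
    return trimmed.mean(axis=1)            # (n_windows, n_channels)

def decode(features, weights):
    """Toy linear decoder from EMG features to control dimensions (e.g. a 2D cursor)."""
    return features @ weights               # (n_windows, n_outputs)

if __name__ == "__main__":
    emg = np.random.randn(2 * FS, N_CHANNELS)     # two seconds of fake data
    feats = window_features(preprocess(emg))
    cursor = decode(feats, np.random.randn(N_CHANNELS, 2))
    print(cursor.shape)                           # (20, 2)
```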

So what can we do with it? Here is a video showing that we can decode enough of the signal to recreate the entire state of the hand in virtual reality. This has obvious applications straight up in VR, as a way to get presence in VR and not have these sticks where your hands should be, but to really have your hands back in VR. And I think, even apart from all the utility of using your hands as controllers, just having your hands and the ability to see them in VR is an amazing experience as far as the sense of immersion goes. It's also showing here that we can detect forces. He's squeezing his fist harder or softer, and because what we get is muscle activity, the force of how tightly or gently you're squeezing is something that we can detect. And it even works in VR. I'll come to some more use cases and some more videos in a second, as I get into the design issues.
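
At heart, that hand-reconstruction demo is a regression from EMG features to joint angles plus a grip force. Here is a hedged sketch of that shape of model, using a generic scikit-learn regressor purely as a stand-in; the feature count, joint count, and network size are invented for illustration and are not CTRL-labs' actual model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed shapes: 16 EMG feature channels in, 22 joint angles plus 1 grip force out.
N_FEATURES, N_JOINTS = 16, 22

def train_hand_model(emg_features, joint_angles, grip_force):
    """emg_features: (n, 16); joint_angles: (n, 22); grip_force: (n,)."""
    targets = np.column_stack([joint_angles, grip_force])
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
    model.fit(emg_features, targets)
    return model

def predict_hand_state(model, emg_features):
    """Split the regressor output back into a pose vector and a force scalar."""
    out = model.predict(emg_features)
    return out[:, :N_JOINTS], out[:, N_JOINTS]

if __name__ == "__main__":
    n = 1000
    X = np.random.randn(n, N_FEATURES)            # fake EMG features
    pose = np.random.randn(n, N_JOINTS)           # fake joint angles
    force = np.random.rand(n)                     # fake grip force
    model = train_hand_model(X, pose, force)
    p, f = predict_hand_state(model, X[:5])
    print(p.shape, f.shape)                       # (5, 22) (5,)
```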

Designing for Neural Interfaces

So, once we start thinking about this as a major paradigm shift for human-computer interaction, we are faced with a series of new design challenges that really emerge as critical to making this transition smooth and making experiences that feel really great. Once you've made this transition and you no longer have a physical device that you're holding as the controller, the question becomes: what are the constraints that give shape to the design problem? A mouse has to be a certain size and weight and fit the hand, keyboards have to be laid out in a certain way and cost a certain amount, and touch screens have only a certain amount of real estate and can only do certain things. So the design problem is fairly scoped.

If the question is, what do you want to do with your hands to control software? That's a much bigger design question, and where do you even start? So, from one perspective, what we're trying to do on the design team and the interactions team at CTRL-labs is think about these problems: A, what are the essential qualities that are going to make a great experience, and B, what are the constraints, which are now probably about human limits rather than the limits of the technology, or at least about where the limits of the technology and of the human combine.

Neural Interaction Design Challenges

So, just to talk about some specific, very concrete problems that we're thinking about when designing: obviously, for VR and AR, there's navigating in three and six degrees of freedom. What do you want to be doing to fly around in VR? How do you want to use your hands to move an object? As far as reliability goes, one nice thing about a mouse and a keyboard is that you can leave them there and walk away when you're not using them. Here, you can't take your hands off when you're done using an app. So we need an extremely reliable on-off activation, wake-word-like system, where we really know whether you intend to be using this thing or not.
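
That "you can't take your hands off" problem is essentially a gating problem. Here is a minimal sketch of the kind of hysteresis-plus-dwell-time activation gate you might put in front of such an input stream; the thresholds, dwell counts, and the idea of a scalar wake score are assumptions for illustration, not how CTRL-labs actually does it.

```python
class ActivationGate:
    """Pass control events through only while an explicit 'wake' gesture is held.

    Hysteresis (separate on/off thresholds) plus a dwell time guards against
    accidental activation. All numbers here are illustrative assumptions.
    """

    def __init__(self, on_thresh=0.8, off_thresh=0.3, dwell_frames=10):
        self.on_thresh = on_thresh
        self.off_thresh = off_thresh
        self.dwell_frames = dwell_frames
        self.active = False
        self.counter = 0

    def update(self, wake_score):
        """wake_score: decoder confidence (0..1) that the wake gesture is held."""
        if not self.active:
            self.counter = self.counter + 1 if wake_score > self.on_thresh else 0
            if self.counter >= self.dwell_frames:
                self.active, self.counter = True, 0
        else:
            self.counter = self.counter + 1 if wake_score < self.off_thresh else 0
            if self.counter >= self.dwell_frames:
                self.active, self.counter = False, 0
        return self.active

# Example: only forward cursor deltas while the gate is active.
gate = ActivationGate()
for score, delta in [(0.9, (1, 0))] * 12 + [(0.1, (0, 1))] * 12:
    if gate.update(score):
        pass  # forward delta to the application here
```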

Ergonomics & Comfort

I'll come to some of those other issues in a second. Ergonomics is huge. I think with this technology we have an enormous opportunity and also a responsibility. The opportunity is to solve a lot of the ergonomics problems with today's systems. Raise your hand if you use a custom keyboard that you've chosen at least partly for ergonomic reasons. Okay, fewer than I thought. I thought everyone here would have a crazy wonky keyboard setup with the split thing. Or maybe I'll ask this: raise your hand if you are limited in how much and how long you can work by the ergonomics issues of your setup. Okay. You should get better keyboards. So, you know, this is a real problem for people. And now we also have the responsibility to make interactions that actually are safe and feel good and comfortable.

And so, here are just a few principles that we think a lot about. Smaller is better. From one perspective, if the point of ergonomics is to maximize control while minimizing effort, then the ideal system is one where there is no effort, or no motion; I'll show a video in a second, and as Courtney mentioned in the intro, we can do control where you're not moving at all. So maybe that's the ideal ergonomic system. When we think about this, there are lots of details of anatomy, of the way the hand is built, where certain kinds of motions are awkward and others feel great. How do you build principles informed by the anatomy into the control schemes?

I think a key point of ergonomics is that variation is essential. If you're just doing the same thing over and over again, you will get into problems, even if it's something that feels really good in the beginning. And that's an advantage that we have: we're not constrained by a single physical layout of a keyboard or something like that, so you could be continually varying the way that you use the system.

And then, like I said before, we can detect force: the force of contraction, the force of the muscles. It's an interaction modality that feels really good, because you're pushing against your own hand and you've got some internal feedback from your own skeleton and muscles. But we have to be careful not to overdo that, because it could become problematic.

So here's the video of Patrick, our other co-founder, who's playing Asteroids. If you look really closely at the hand that's on screen, you can maybe see little wiggles of some fingers. This is two dimensions of continuous control, thrust and rotation, and then you've got two buttons, fire and shield, with one hand, without moving.
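
For a rough sense of what that control surface looks like in software, here is a hedged sketch mapping two continuous channels and two discrete channels decoded from one hand onto Asteroids-style inputs. The channel names, ranges, and thresholds are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DecodedHand:
    thrust: float     # 0..1 continuous, e.g. decoded index-finger force
    rotation: float   # -1..1 continuous, e.g. a decoded left/right balance
    fire: float       # 0..1 confidence of a "fire" twitch
    shield: float     # 0..1 confidence of a "shield" twitch

def to_game_input(hand: DecodedHand, fire_thresh=0.6, shield_thresh=0.6):
    """Map one hand's decoded channels onto Asteroids-style controls."""
    return {
        "thrust": max(0.0, min(1.0, hand.thrust)),
        "rotate": max(-1.0, min(1.0, hand.rotation)),
        "fire": hand.fire > fire_thresh,
        "shield": hand.shield > shield_thresh,
    }

print(to_game_input(DecodedHand(thrust=0.4, rotation=-0.2, fire=0.8, shield=0.1)))
```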

Hand-Object Interaction in XR

Here's our QCon mascot; I thought it was appropriate. So, back to the issue of spatial computing and the "Minority Report" paradigm. The way that I imagine doing this with a neural interface is a lot different from reaching out. If your main sensing system used for input is basically a six-degree-of-freedom controller, detecting where in space the person's hand is, then of course you're going to wind up with Minority Report. But if you just think about the arm as a pipe of information from the brain to the world, and we're just tapping that pipe and getting out the information flowing from the brain that would have controlled your hand, then from one perspective, your hand doesn't matter. What matters is that you have some intent and you're sending some information through your nerves, and that really changes the way that you think about building interactions. I think about this much more like what it would be like to use the Force, where you just have some intent that you're sending out; you have an object, you want to move it, and you're doing little things - little muscle twinges - sending information out through the device that can control objects.

Novel Interactions

So, I think it's pretty clear if you look at the history of computing that all the software we've built and the experiences we have with technology co-evolve with the controllers that exist at the time they're built. Maybe it's easiest to see in game design: when all you had were joysticks and single buttons, you got platformer games; then it evolved to continuous analog sticks, and that opened up the possibility of first-person shooters, where you could look a different way than the way you're running. And then touch-screen games on mobile: if you look at the kinds of games people play on their phones, it's all swiping and tapping and doing little things on the screen.

And one of the great challenges for us is to think about what experiences are now possible with this technology that just didn't make any sense before with the existing controllers. We obviously have a large class of problems to solve where we're trying to replicate what existing controllers do, and do it really well, with some marginal improvements in ergonomics and speed and portability. And you don't have camera occlusion problems; you could do it with your hands in your pockets, you can do it while you're snowboarding with gloves on, or underwater as a scuba diver, and that's all great.

But then there's this other set of problems, which is: what are the things that you can't even imagine doing with an Xbox controller or a keyboard or a mouse? Here are just a few thoughts on that. Imagine trying to control a face, an avatar, an expressive face. Imagine trying to do that with an Xbox controller. Now try to imagine doing it with your hands; that's something people have been doing for thousands of years: puppets. Or maybe not a face, but a creature with many more degrees of freedom than you even have in your hand. These are the experiences that I think are in some sense the most exciting for us to build, because they really could open up new spaces of games, and also just ways of using technology that we don't currently have at all.

And here's just a small glimpse of an early prototype in that direction. It's not going to play. Let me go back. So I've got a hexapod robot here that's mapped to my fingers. There's another one where I had a robot arm and I tried to give myself a back rub. It didn't work out as well.

Text Input

So, let's move on to talk about text input. It's an enormous field in and of itself, and of all the use cases of this technology that I'm most passionate about building, this is probably high up there; the other is a musical instrument. Sorry, this isn't going to loop. I'm just going to play it one more time. This is a prototype that we built almost a year ago. I don't think any of us believe that this is the right way to do text input with this technology, just recapitulating ten-finger QWERTY typing. On the other hand, it's possible and it's kind of cool. It feels really neat. And there are some interesting aspects to it. It adapts to and learns the particular way that you type, so you can imagine starting with something that looks like a standard layout keyboard, and slowly over time it co-adapts with the way that you type, and maybe you can make it smaller and smaller, and by the end you're just doing little wiggles in your pocket and you're still typing. Of course, you still have to know how to touch type in order to use it. And there are some other practical problems. It takes two hands.
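
One way to picture that co-adaptation, the keyboard model slowly drifting toward the way you actually type, is an online classifier that nudges each key's template toward every keystroke it attributes to that key. This toy nearest-template sketch only illustrates the idea; the feature size, learning rate, and adaptation rule are assumptions, and it is not CTRL-labs' actual typing model.

```python
import numpy as np

class AdaptiveKeyDecoder:
    """Nearest-template keystroke decoder that co-adapts with the typist."""

    def __init__(self, keys, n_features, learning_rate=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = list(keys)
        self.templates = {k: rng.normal(size=n_features) for k in self.keys}
        self.lr = learning_rate

    def classify(self, emg_features):
        """Return the key whose template is closest to this keystroke."""
        return min(self.keys,
                   key=lambda k: np.linalg.norm(emg_features - self.templates[k]))

    def adapt(self, emg_features, key):
        """Nudge the template toward the observed keystroke (co-adaptation)."""
        t = self.templates[key]
        self.templates[key] = t + self.lr * (emg_features - t)

decoder = AdaptiveKeyDecoder(keys="abcdefghijklmnopqrstuvwxyz", n_features=16)
stroke = np.random.randn(16)                # fake EMG features for one keystroke
key = decoder.classify(stroke)
decoder.adapt(stroke, key)                  # in practice, adapt on confirmed keys
print(key)
```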

So can we do better than this? Can we do text input with one hand? Can we do text input in a way that people can learn in an hour, or in 10 minutes, and not the however many years it takes people to become good touch typists? Unanswered questions.

Props

An interesting way to use this technology is actually in combination with a physical object that you're holding in your hand, and that opens up some really interesting possibilities: you hold some prop, and it's just an inert object, but you can imbue it with magical properties by virtue of what we detect about the way you're manipulating or squeezing or holding it. So any stick can become a virtual stylus, maybe with different buttons that you can squeeze to change the color or shape of the ink. Imagine just having little trinkets in your pocket, like your car keys or your phone or a marble, and using them as control devices. These are some of the fun explorations that we're doing.
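
The prop idea is easy to picture as a mapping from grip signals onto virtual properties of an otherwise inert object: for instance, decoded squeeze force modulating the width of a virtual stylus's ink, and a quick double squeeze cycling its color. Everything in this sketch (thresholds, timings, the specific mappings) is invented for illustration.

```python
import time

class VirtualStylus:
    """Turn an inert stick into a 'magic' stylus using decoded grip signals."""

    COLORS = ["black", "red", "blue"]

    def __init__(self, double_squeeze_window=0.4):
        self.color_idx = 0
        self.last_squeeze = 0.0
        self.window = double_squeeze_window

    def update(self, grip_force, squeeze_event):
        """grip_force: 0..1 decoded force; squeeze_event: a sharp squeeze was detected."""
        now = time.monotonic()
        if squeeze_event:
            if now - self.last_squeeze < self.window:      # double squeeze: cycle color
                self.color_idx = (self.color_idx + 1) % len(self.COLORS)
            self.last_squeeze = now
        line_width = 1.0 + 9.0 * grip_force                # map force to 1..10 px
        return {"color": self.COLORS[self.color_idx], "width": line_width}

stylus = VirtualStylus()
print(stylus.update(grip_force=0.3, squeeze_event=False))
```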

So, I was talking before about the constraints that give shape to the problem. I think one of the constraints that we've already bumped up against, in several instances, is actually just the user's own cognitive limits in terms of how many continuous, independent dimensions you can control. There are a lot of unanswered questions here. We have a demo where you get control of basically just a cursor, a two-dimensional cursor, and that's sort of how we trained the Asteroids demo: you first learn to control the cursor, now you've got two degrees of freedom, and then you can apply that to flying the ship. So after we built that, we were like, okay, what else can we build? Can we control four dimensions, can we control six? We thought the quickest way to test that was to put two or three cursors on the screen: can you simultaneously control six degrees of freedom via three independent 2D cursors?

And at first, it was like, no, that's impossible. It's way too hard. Things are flying around, you can't control it. Then we spent a few minutes with it and suddenly it's like, "Wait a second, I think I'm starting to get it." You could play with it for a while, and with practice you could actually learn to do this, but it kind of made my brain melt. It was so cognitively demanding. You've got this visual attention problem: you're trying to independently track three different objects and remember which little thing you're doing with which finger to control that one versus that one. So it was immediately clear that the biggest problem with this particular demo was that we designed it badly; we placed too much cognitive load on the user. With a different design, you probably could have done six degrees of freedom much more fluidly. After all, the hand itself has twenty-some-odd degrees of freedom and you have no problem controlling that. So this is something humans are capable of doing; we just have to understand how to do it from a design perspective.

Another issue I want to bring up is customization and personalization. Well, I like these shows of hands, so raise your hand if you invert the y-axis when you're playing first-person shooters or flying games. Or even on your Mac, on the mouse - what do they call it, that "natural" scroll direction? And I'm like, "That's not natural. What are you talking about? That's backwards." So, you can amuse yourself tonight by reading the Reddit forums where people tear each other's throats out about the right way to flip this one bit in a user interface.

And I think the point is that, going back to the question of constraints, if you don't have to have a single physical object that everybody holds as a user interface, and if the mapping between what people do with their hands and the control it produces is completely fluid, we have the possibility of letting every person define their own customized mapping, and of learning from the user the way they want to control things, rather than making them learn to control things the way we tell them to. So, understanding the right ways to do that, and where it's appropriate… I don't think it's a panacea, and it introduces a lot of its own problems. It's much easier if you can tell people, "Hey, in order to control this thing, you use your finger like this to click." Great, that's easy. You can put that in documentation and make some animated GIFs, and people understand it. If it's, "Choose how you want to make a click and then use this training software," that's a completely different set of problems. But the possibilities are extremely intriguing, because I want this nub back. This is from my IBM ThinkPad - what do they call it? TrackPoint. That's still my best pointer, and they don't make them anymore. Actually they do, but not on a Mac.
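
The "choose how you want to make a click, then use this training software" flow could look roughly like this: record a handful of examples of whatever the user decides a "click" is, plus some rest data, fit a tiny per-user detector, and use it from then on. This is a hedged sketch of how such a calibration step might work, not a description of CTRL-labs' software; the feature sizes and threshold are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_click(click_examples, rest_examples):
    """Fit a per-user 'click' detector from user-chosen gesture recordings.

    click_examples, rest_examples: arrays of shape (n_samples, n_features),
    EMG feature windows recorded while the user performs their chosen click
    gesture versus while they are at rest.
    """
    X = np.vstack([click_examples, rest_examples])
    y = np.concatenate([np.ones(len(click_examples)), np.zeros(len(rest_examples))])
    return LogisticRegression(max_iter=1000).fit(X, y)

def is_click(model, emg_features, threshold=0.9):
    """High threshold to favor precision over recall for an unprompted gesture."""
    return model.predict_proba(emg_features.reshape(1, -1))[0, 1] > threshold

# Usage: 20 recorded "click" windows and 20 "rest" windows, 16 features each.
clicks, rest = np.random.randn(20, 16) + 1.0, np.random.randn(20, 16)
model = calibrate_click(clicks, rest)
print(is_click(model, clicks[0]))
```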

So, I want to talk about the classic design principle of skeuomorphism: taking an object that people are familiar with from the real world and turning it into its software analogy - dials and buttons and sliders. If you have your hands as the controllers, that's an extremely tempting thing to do, and I think we'll get a lot of mileage out of this. But things are different when you don't actually have the physical object in front of you. You don't necessarily get the same physical feedback of a switch doing its thing, and the muscles don't do quite the same thing.

And so, there's another set of design problems here: understanding how to take these ideas and analogies from physical interaction, make them virtual, and make them usable with just the neural interface. This is one way you can imagine that going: you might start with something quite close to what the physical device did, but over time, maybe through customization and adaptation, or maybe just through refinements of the design, it becomes more and more abstracted and refined, boiled down to its essential nature and brought into its virtual existence.

So this is a little graphical story of the evolution of the company as seen through the hardware. The device on the upper left was the first prototype we had that really worked, that got great signal down to the level of individual motor units, and it allowed us to start doing the machine learning research and the design work on top of it. But obviously, this was literally a sweatband with electrodes sewn in, not quite a user product. So, we're getting there: we're reducing the mechanicals and making this into something that looks more like a product; it's now fully wireless and easy to wear. The version which we will release as a developer kit early next year, which is not shown here, is a close evolution of what's shown in the upper right, except it's all merged into a single piece. In that generation, the electrode band is a separate device from the piece with the battery and the radio on it, which is worn on the wrist; that's all being integrated into one device, which will be worn on the forearm.

In another year after that, we'll get down to the wrist, to something that looks like a watch. The big difference between being on the forearm and the wrist, and the reason why we started at the forearm, is really just the mechanical issue of dealing with the tendons - if you look at your wrist while you do this, you see these tendons here that create quite a challenge for maintaining contact between the electrodes and the skin. So that's just a simple mechanical challenge that we have to work through. But the signal is there; in fact, it's actually quite a bit better at the wrist, because more muscles come up to the surface and are available, whereas at the forearm some of them are deeper and harder to get.

So that's it. Please come visit our website and sign up for the developer kit. We will start by releasing to a small set of pre-release developers that we want to work closely with, to understand how to use this technology and to work out the kinks with our developer kit, and then work up to a broader release next year. So follow us on Twitter, and go to controllabs.com. We're very excited to see what people build with this technology. We are highly aware that we're not going to come up with the best ideas ourselves, and we just want to get this out into the world for people to start playing with. Thanks.

Female 1: So we have about five minutes. Some folks have some questions. Awesome. We got one right here. Yes, actually. Thank you.

Man 1: I was wondering if you have had any problems with overriding or overloading the muscle control system. I'm just imagining I'm responding to a text and I crush my coffee cup and spill coffee everywhere or something.

Berenzweig: Yeah. So, as I mentioned in the slide, you can't take your hands off, and the activation question is one place we counter that. The way I think about it is, maybe you've got a hierarchical system where the top level decides what activity the user is engaged in at that moment - am I holding a phone, am I in the particular hand pose that means I'm playing this game? - and then within that, you would have a control scheme. One part of the system would be responsible for switching at the top level between applications, hopefully without crosstalk between them, but it's an interesting challenge. Obviously, you've got two hands, so we can maybe make a lot of use of the fact that you can turn things on and off with one hand and then let your other hand do things. So...
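
That hierarchical scheme - first decide what activity the user is engaged in, then hand the decoded signals to that context's control scheme - maps naturally onto a two-level dispatcher. A minimal sketch under those assumptions; the pose names, thresholds, and per-context schemes are all made up for illustration.

```python
def detect_context(decoded):
    """Top level: decide what the user is doing right now.

    'decoded' is assumed to be a dict of decoder outputs; the pose names and
    rules here are invented for illustration.
    """
    if decoded.get("holding_phone_pose", 0.0) > 0.8:
        return "phone"
    if decoded.get("game_grip_pose", 0.0) > 0.8:
        return "asteroids"
    return "idle"

CONTROL_SCHEMES = {
    "phone": lambda d: {"scroll": d.get("index_force", 0.0)},
    "asteroids": lambda d: {"thrust": d.get("thrust", 0.0)},
    "idle": lambda d: {},          # nothing passes through while idle
}

def dispatch(decoded):
    """Second level: route decoded signals to the active context's control scheme."""
    return CONTROL_SCHEMES[detect_context(decoded)](decoded)

print(dispatch({"game_grip_pose": 0.9, "thrust": 0.5}))
```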

Female 1: Any other questions?

Man 2: What are the biggest challenges you still have to solve before going to market with this?

Berenzweig: So the hardware is pretty much ready, I think, at least for the developer kit version. I've come to believe over the past year that the design problems we have are the hardest things we're working on. We don't have to solve them all ourselves, and we won't. Basically, I think our responsibility, before we put something out into the world, is to have at least an opinion we can point developers to about a good way we've found to do some of the basics: 2D pointing and clicking and what feels good for that, navigating in VR, and some of the other basic control schemes. So, that's one. And then there's some more work on the science side we have to do too: the model that predicts what your hand is doing, and generalizing that across the population as much as we can so that it works out of the box for as many people as possible, is ongoing work as well.

Man 3: You mentioned that one of your other areas of interest was designing a musical instrument with this. I was just curious to know, have you done any prototyping on that, and have you had any new thoughts about how it might work?

Berenzweig: Yes. We've done some prototyping. I can't say there's anything that I'm super proud of yet. It's interesting; it's one of the reasons I joined the company. Back in the 90s, there was a band called Sensorband, these guys in Japan who were instrumenting themselves with various kinds of sensors and using them to control music. And among the sensors they were using were single EMG sensors. I immediately thought that's a really great sensor for music, because you get tension and muscle force, which seem very much related to expressivity when you're performing music. So, yeah, I'm figuring out the right combination: we've got some demos where you can play individual notes reliably, and that's great, and we've got other demos where you can shape sound with your hand and use your hand more expressively, and that's great. But to make a really powerful instrument, combining those things in a way where you still have the reliability of one and the expressivity of the other is the real challenge.

Man 4: So, I know from some of the examples in the videos that there appears to be a fraction of a second of latency. I was wondering what the sources of that latency are. Is that something that you think needs to be fixed by …

Berenzweig: A number of those videos are actually pretty old. If you come up afterwards, I can show you some more recent videos. We've eliminated almost all of the visible latency in some of those things.

Female 1: Any other questions?

Man 5: Have you guys experimented with - Mandarin is basically a gesturing language, as opposed to English. So what does that look like with your platform?

Berenzweig: I haven't thought about that at all. That's really interesting. I have thought about ways in which you could imagine talking with your hands as another way of doing text input. And I should learn more about Mandarin. I was actually thinking about Korean, because in the Korean script the shapes of the letters were designed to mimic the shape of the speech production system when you make those sounds. So I was wondering if people there maybe have more of an intuition for this. But yes, that's a good lead. Thank you.

Female: Only got time for one more. Any other ones?

Man 6: This will either be a can of worms or nothing at all. Have you given any thought to any ethical or privacy concerns dealing with consuming and transmitting micro physiological signals?

Berenzweig: Sure. Absolutely. I mean, even from an extremely practical perspective, with our keyboard demo, if you can decode what people are typing, then you have an enormous responsibility to protect that. So at one level, our privacy and security concerns are not any different from those of any computer system that somebody is using and putting private information into. We just have to be responsible about the way that we handle it and keep it secure.

Maybe there are some wireless issues, but again, the same as a Bluetooth headset, let's say. I think there may be some more subtle issues, less about security and more about health and wellness in general, like what I was talking about with ergonomics before. We're a purely passive device; we can't affect the body in any way. We're not injecting information or current into the body at all, so I don't think there are any safety concerns from that perspective. But yeah, obviously for a device that people will be wearing on their body, there are just the basic requirements of doing that safely.


Recorded at:

Nov 28, 2018
