
Jibo Releases SDK Aiming to Bring Robotics into Homes


Indiegogo-funded startup Jibo has announced an SDK for developing applications, known as “skills,” for its “social robot” for the home. Skills will target entertainment, education, and IoT integration.

Jibo is an attempt to create a robot that can interact with its environment by recognizing people’s voices and faces, speaking to them and showing them visual content, and moving its body parts, which include a “chest” and a “head”.

The Jibo SDK aims to let developers create skills using a JavaScript API that provides access to the more computationally intensive parts of the Jibo platform, which are written in C/C++. The capabilities developers can access through the Jibo SDK include:

  • Audio and speech technology, used to recognize speech and respond with spoken output.
  • Visual processing, which makes it possible to recognize faces and movement, as well as show animated visual content on Jibo’s screen.
  • Interaction and movement capabilities, aided by three servo motors. One of the aims of Jibo is to remove the complexities of robotics while making it possible to create rich and expressive movements.

Interestingly, Jibo has two cameras, but developers do not have direct access to them. Instead, they get a spatial representation of what Jibo “sees”. This is meant to make it impossible for anyone to use Jibo’s cameras to spy on people.

According to Jibo’s head of SDK development, Jonathan Ross, the decision to use JavaScript was motivated by its being one of the fastest-growing languages and by the richness of its ecosystem, both in terms of available libraries and programming tools. The Jibo SDK itself is built on Electron and includes an animation editor, a behavior editor, a speech editor, and a simulator.

In conversation with InfoQ, Justin Woo, Developer Evangelist, and Jonathan Ross, Head of SDK Development, explained that Jibo can connect to any IoT device that provides some kind of public API, making it possible to use the robot as a “home commander”.

For the initial release of Jibo the robot, we are targeting technically savvy families - think younger GenX with kids at home, and singles - think Millennials. When we think about applications - what we call skills for Jibo - we think about the different roles Jibo will play as the new family companion, e.g. Jibo as an educator, Jibo as an entertainer, Jibo as the home commander. When you start thinking about Jibo as a someone and all he can do to be a part of his family, the individual skill ideas are endless.
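The “home commander” idea can be sketched in JavaScript. Everything here is a generic illustration under assumed names: the bridge URL, device ID, and endpoint shape are invented for the example, not part of any Jibo API, and any IoT device exposing a public HTTP API could be driven the same way.

```javascript
// Build the request for switching a hypothetical smart device on or off.
// The /devices/:id/state endpoint is an assumption for illustration only.
function buildSwitchRequest(baseUrl, deviceId, on) {
  return {
    url: `${baseUrl}/devices/${deviceId}/state`,
    method: "PUT",
    body: JSON.stringify({ power: on ? "on" : "off" }),
  };
}

// A skill could then send the request, e.g. with fetch (global in Node 18+).
async function switchDevice(baseUrl, deviceId, on) {
  const req = buildSwitchRequest(baseUrl, deviceId, on);
  const res = await fetch(req.url, {
    method: req.method,
    headers: { "Content-Type": "application/json" },
    body: req.body,
  });
  return res.ok;
}

console.log(buildSwitchRequest("http://bridge.local", "lamp-1", true).url);
// → http://bridge.local/devices/lamp-1/state
```

Separating request construction from transport keeps the device-specific details in one place, so a skill can target a different device by swapping the builder.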

Jibo SDK is based on behavior trees, which are particularly suited to model behavior and control flow of autonomous agents, and to coordinate concurrent actions and decision-making processes, according to Woo and Ross.
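To make the behavior-tree idea concrete, here is a minimal generic sketch in JavaScript — not the actual Jibo SDK API. The two classic composite nodes are shown: a Sequence (all children must succeed, in order) and a Selector (the first success wins), with invented leaf behaviors for a hypothetical greeting skill.

```javascript
const SUCCESS = "SUCCESS";
const FAILURE = "FAILURE";

// A Sequence ticks its children in order and fails on the first failure.
function sequence(...children) {
  return () => {
    for (const child of children) {
      if (child() === FAILURE) return FAILURE;
    }
    return SUCCESS;
  };
}

// A Selector ticks its children in order and succeeds on the first success.
function selector(...children) {
  return () => {
    for (const child of children) {
      if (child() === SUCCESS) return SUCCESS;
    }
    return FAILURE;
  };
}

// Hypothetical leaves for a greeting skill (stand-ins, not real sensor calls).
const seesFace = () => SUCCESS;     // pretend the vision system found a face
const knowsPerson = () => FAILURE;  // face not yet enrolled
const sayHello = () => { console.log("Hello!"); return SUCCESS; };
const askName = () => { console.log("What's your name?"); return SUCCESS; };

// Greet by name if the person is known, otherwise ask who they are.
const greet = sequence(
  seesFace,
  selector(sequence(knowsPerson, sayHello), askName)
);

greet(); // prints "What's your name?" because the person is unknown
```

The appeal for autonomous agents is visible even at this size: decision-making is declared as a tree of small, reusable behaviors rather than hand-written as nested conditionals, and concurrent or alternative actions slot in as extra branches.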

Woo and Ross additionally explained that Jibo uses two cloud-based services: one for persistent data storage, so skill data can be securely backed up to the cloud to prevent data loss; the other for text-independent speech recognition and natural language understanding. Jibo, however, does not use cloud-based recognition all the time:

Jibo’s wake up phrase, “Hey, Jibo”, is processed on board, so he only streams to the cloud after hearing it. The second is persistent data storage. Each skill has a protected area in Jibo’s local memory it can read from and write to. Jibo securely backs up this data to the cloud so nothing is ever lost.
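The wake-phrase gating described above can be sketched as a small state machine — a generic illustration of the pattern, not Jibo's actual implementation. Text chunks stand in for transcribed audio frames; in a real system the local matcher would be an on-device keyword-spotting model.

```javascript
const WAKE_PHRASE = "hey, jibo";

// Returns a handler that keeps audio local until the wake phrase is heard,
// then forwards exactly one utterance to the cloud callback and re-arms.
function makeWakeGate(streamToCloud) {
  let awake = false;
  return function onAudio(chunk) {
    if (!awake) {
      // On-device matching only -- nothing leaves the robot here.
      if (chunk.toLowerCase().includes(WAKE_PHRASE)) awake = true;
      return;
    }
    streamToCloud(chunk); // cloud ASR/NLU runs only after the wake phrase
    awake = false;        // re-arm for the next interaction
  };
}

// Example: of three chunks, only the post-wake utterance reaches the "cloud".
const sent = [];
const onAudio = makeWakeGate((chunk) => sent.push(chunk));
onAudio("background chatter");
onAudio("Hey, Jibo");
onAudio("what's the weather?");
console.log(sent); // → [ "what's the weather?" ]
```

The privacy property falls out of the structure: the cloud callback is simply unreachable until the local detector flips the gate.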

A number of tasks are done entirely locally on the robot, including vision, perception, audio localization, voice ID, face ID, animation/motor control, NLU, text-dependent speech recognition, text-to-speech (Jibo’s voice), graphics, and sound.

While the Jibo SDK is already available, Jibo’s release is expected to take place in late 2016. Along with its SDK, Jibo has also launched a developer forum.
