Amazon Announces Alexa Prize SocialBot Grand Challenge 4 Winners

Amazon recently announced the winners of the fourth Alexa Prize SocialBot Grand Challenge, a competition in which university students develop conversational AI. A team from Czech Technical University (CTU) won first prize, while teams from Stanford University and SUNY Buffalo took second and third place, respectively.

The announcement appeared on the Amazon Science research website. Teams from colleges and universities competing in the challenge must build Alexa skills that converse with random Alexa users, with the goal of engaging a user for at least 20 minutes and earning a rating of 4 out of 5 from judges. Although no team achieved those goals, this year's winner, Team Alquist from CTU, had an average interaction duration of just over 14 minutes and an average rating of 3.28. According to team leader Jakub Konrád:

I am delighted and proud of our entire team for building a bot that managed to reach the finals for the fourth consecutive year. This year we strove to create a system capable of flexible conversation by synthesizing generative approaches with prepared scenarios that could adjust to users’ needs.

Amazon first announced the SocialBot Grand Challenge in 2016. Although no team has yet met the goal of a 20-minute conversation and a 4 out of 5 rating, overall performance has improved with each annual competition. Several universities have entered teams in multiple years: a CTU team has finished in the top three of every competition, and this year's runner-up, Stanford, also earned second place last year. Earlier this year, Amazon launched an additional Alexa Prize competition, the TaskBot Challenge, in which university teams must build a multimodal (voice and vision) agent that helps users complete multi-step do-it-yourself and cooking tasks.

To build their chatbots, the teams use the Alexa Skills Kit and AWS Lambda to implement an Alexa skill. Amazon also provides the teams with CoBot, an additional Python toolkit that includes natural language understanding, dialog management, and response generation services. These services use other AWS technologies for auto-scaling and high availability, allowing the teams to focus on the core conversational problem.
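For illustration, here is a minimal sketch of the Lambda entry point for an Alexa skill, using the ASK SDK for Python (ask-sdk-core). The handler class and response text are hypothetical; a competition socialbot would hand the user's utterance to CoBot's dialog-management pipeline rather than return a canned reply.

# Minimal sketch of an Alexa skill hosted on AWS Lambda using the
# ASK SDK for Python; the handler and speech text are illustrative only.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_request_type
from ask_sdk_model import Response

class LaunchRequestHandler(AbstractRequestHandler):
    """Handles the LaunchRequest sent when a user opens the skill."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        speech = "Hi, this is a socialbot. What would you like to talk about?"
        return (
            handler_input.response_builder
            .speak(speech)
            .ask(speech)      # keep the session open for the user's reply
            .response
        )

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# AWS Lambda invokes this module-level handler by name
lambda_handler = sb.lambda_handler()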

Once the bots are running and the competition begins, any Alexa user can connect to one of the bots, chosen randomly, by giving Alexa the prompt "Let's chat." After ending the conversation, users are prompted for feedback. The competition tracks several metrics about users' conversations with the bots; the most important is the 90th percentile duration. Other metrics include median duration and user rating on a scale of 1 to 5. Although this year's bots began the competition with better performance than last year's, quality began to decline during the Semifinal stage of the competition; Amazon attributes this to teams "experimenting with research ideas that are inherently unpredictable."
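As a rough illustration of how those metrics might be computed, the sketch below derives a 90th-percentile duration, a median duration, and a mean rating from a handful of invented conversation records; the data and aggregation method are assumptions, not Amazon's actual scoring pipeline.

# Illustrative metric computation over invented conversation data;
# the competition's real scoring pipeline is not public.
import statistics

durations_min = [2.5, 7.0, 14.2, 21.0, 3.3, 9.8, 16.4]  # minutes per conversation
ratings = [3, 4, 5, 2, 4, 3, 4]                          # user ratings, 1 to 5

p90_duration = statistics.quantiles(durations_min, n=10)[-1]  # 90th percentile
median_duration = statistics.median(durations_min)
mean_rating = statistics.fmean(ratings)

print(f"p90 duration: {p90_duration:.1f} min, "
      f"median duration: {median_duration:.1f} min, "
      f"mean rating: {mean_rating:.2f}")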

In a Reddit discussion on the Challenge, a member of the second-place Stanford team wrote:

I do want to push back on calling the regular Alexa Prize a "boring chatbot". Sure there has been a lot of progress with language models. But we found that in the applied setting, the open-domain chatbot problem still has many challenges - personality consistency, memory and state persistence, controllable generation, response latency etc. I'm probably biased, but I think these are interesting and important problems that are far from having great solutions yet and they prevent an open-domain chatbot from being an enjoyable conversational partner.

Technical papers submitted by each team describing their solutions are available on the Alexa Prize website.
 
