
Creating Robust Interpretable NLP Systems with Attention

Duration: 21:39

Summary

Alexander Wolf introduces attention, an interpretable type of neural network layer loosely based on human attention, and explains why and how it has been used to revolutionize NLP.
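The talk itself is a video; as a rough, illustrative sketch of the mechanism the summary refers to (the names, shapes, and example data below are assumptions, not taken from the presentation), the following NumPy snippet computes scaled dot-product attention. The attention weights sum to one over the input positions, which is what makes the layer comparatively easy to inspect.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    # query: (d,), keys: (seq_len, d), values: (seq_len, d_v)
    # Scores measure how relevant each input position is to the query.
    scores = keys @ query / np.sqrt(query.shape[-1])
    weights = softmax(scores)      # interpretable: sums to 1 over positions
    context = weights @ values     # weighted mix of the value vectors
    return context, weights

# Toy example: which of three token representations the query attends to.
rng = np.random.default_rng(0)
keys = values = rng.normal(size=(3, 4))
query = keys[1]                    # query resembles token 1
context, weights = attention(query, keys, values)
print(weights)                     # the weight at position 1 should dominate

Inspecting the printed weights is the basic interpretability story: each weight says how much the corresponding input position contributed to the output.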

Bio

Alexander Wolf is a Data Scientist at Dataiku, working with clients around the world to organize their data infrastructures and deploy data-driven products into production. Prior to that, he worked on software and business development in the tech industry. He's passionate about the latest developments in deep learning and tech, and works on enriching Dataiku's NLP features.

About the conference

PAPIs is a conference made by and for ML practitioners. Anyone can submit talk proposals (blind-reviewed by our committee) or ask questions to take discussions further. Breaks between sessions are perfect for getting to know fellow attendees, speakers, and top ML companies.

Recorded at:

Feb 24, 2019


Community comments

  • Spoiler!

    by Jay Vercellone,


    This dude just spoiled Westworld for me. Not cool.
