About the station

When you build a model for natural language processing (NLP), such as a recurrent neural network, it helps a ton if you're not starting from zero. In other words, if you can draw upon other datasets to build your understanding of word meanings, and then use your training dataset just for subject-specific refinements, you'll get farther than if you used your training dataset for everything. This idea of starting with pre-trained resources has an analogue in computer vision, where initializations from ImageNet for the first few layers of a CNN have become the new standard. There's a similar progression underway in NLP, where simple(r) embeddings like word2vec are giving way to more advanced pre-training methods that aim to capture more sophisticated understanding of word meanings, contexts, language structure, and more. Relevant links: https://thegradient.pub/nlp-imagenet/
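To make the "don't start from zero" idea concrete, here's a minimal sketch (not from the episode) of loading pre-trained GloVe-style word vectors, freezing them in an embedding layer, and training only a small task-specific head on your own data. It assumes gensim and PyTorch are installed; the model name and the TinyClassifier are illustrative choices, not anything the episode prescribes.

```python
# Sketch: reuse general-purpose word vectors, learn only a task-specific head.
import gensim.downloader as api
import torch
import torch.nn as nn

# Download 50-dimensional GloVe vectors trained on a large general corpus
# (this is the "other dataset" that supplies general word meanings).
wv = api.load("glove-wiki-gigaword-50")

# Copy the pre-trained vectors into an embedding layer and freeze it, so
# your (smaller) training dataset is spent only on subject-specific layers.
pretrained = torch.tensor(wv.vectors, dtype=torch.float32)
embedding = nn.Embedding.from_pretrained(pretrained, freeze=True)

class TinyClassifier(nn.Module):
    """Averages frozen pre-trained word vectors, then learns a small head."""
    def __init__(self, embedding, num_classes=2):
        super().__init__()
        self.embedding = embedding
        self.head = nn.Linear(embedding.embedding_dim, num_classes)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        vectors = self.embedding(token_ids)  # (batch, seq_len, dim)
        return self.head(vectors.mean(dim=1))

# Toy usage: look up vocabulary indices for a short sentence and score it.
tokens = ["the", "model", "works", "well"]
ids = torch.tensor([[wv.key_to_index[t] for t in tokens]])
model = TinyClassifier(embedding)
print(model(ids))  # untrained logits; the head is what your own data refines
```

The same pattern scales up to the newer pre-training methods the episode discusses: swap the frozen embedding lookup for a pre-trained contextual encoder and fine-tune a head (or the whole model) on the task at hand.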

Homepage

http://lineardigressions.com/

Latest bursts

So long, and thanks for all the fish

Aired last week

A Reality Check on AI-Driven Medical Assistants

Aired 2 weeks ago

A Data Science Take on Open Policing Data

Aired 3 weeks ago

Procella: YouTube's super-system for analytics data storage

Aired last month