Abstract

We build word models for American Sign Language (ASL) that transfer between different signers and different aspects (viewing directions). This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data. Transfer learning is possible because we represent blocks of video with novel intermediate discriminative features based on splits of the data. By constructing the same splits in avatar and human data and clustering appropriately, our features are both discriminative and semantically similar: across signers, similar features imply similar words. We demonstrate transfer learning in two scenarios: from an avatar to a frontally viewed human signer, and from an avatar to a human signer in a 3/4 view.
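
The split-based feature construction described above can be illustrated with a small sketch. The code below is an interpretation of that idea, not the authors' implementation: it assumes each video block has already been reduced to a fixed-length descriptor, uses logistic regressions as stand-in split classifiers, omits the paper's clustering step, and all function and variable names (make_word_splits, avatar_X, human_X_small, and so on) are hypothetical.

```python
# Minimal sketch of split-based transferable features, under the assumptions
# stated above.  Not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def make_word_splits(vocabulary, n_splits=50):
    """Randomly partition the shared word vocabulary into two halves, n_splits times."""
    vocabulary = np.asarray(vocabulary)
    return [rng.choice(vocabulary, size=len(vocabulary) // 2, replace=False)
            for _ in range(n_splits)]

def learn_split_classifiers(X, word_labels, splits):
    """Within one domain (avatar or human), learn one classifier per split that
    predicts which side of the split a video block's word lies on."""
    return [LogisticRegression(max_iter=1000)
            .fit(X, np.isin(word_labels, left).astype(int)) for left in splits]

def split_features(classifiers, X):
    """Stack the split-classifier responses into one feature vector per block.
    Because both domains use the same word splits, these features are
    semantically comparable across signers."""
    return np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers])

# Usage sketch (data arrays assumed to exist):
#   splits = make_word_splits(shared_vocabulary)
#   avatar_clfs = learn_split_classifiers(avatar_X, avatar_words, splits)
#   human_clfs = learn_split_classifiers(human_X_small, human_words_small, splits)
#   word_model = LinearSVC().fit(split_features(avatar_clfs, avatar_X), avatar_words)
#   predicted = word_model.predict(split_features(human_clfs, human_X_test))
```

The point the sketch tries to capture is that the word splits, rather than the classifiers themselves, are shared between domains, so features computed on avatar blocks and on human blocks index the same semantic distinctions.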

Keywords

Avatar, Computer science, Discriminative model, Transfer of learning, Word (group theory), Artificial intelligence, Cluster analysis, Sign (mathematics), Natural language processing, Transfer (computing), Sign language, Speech recognition, Human–computer interaction, Linguistics, Mathematics


Publication Info

Year: 2007
Type: article
Pages: 1-8
Citations: 124
Access: Closed

Citation Metrics

OpenAlex: 124
Influential: 2
CrossRef: 80

Cite This

Ali Farhadi, David Forsyth, R. H. White (2007). Transfer Learning in Sign Language. 2007 IEEE Conference on Computer Vision and Pattern Recognition, 1-8. https://doi.org/10.1109/cvpr.2007.383346

Identifiers

DOI: 10.1109/cvpr.2007.383346
