Taniya Mishra

from New York, NY
Age ~47

Taniya Mishra Phones & Addresses

  • 230 79th St, New York, NY 10024 (212) 362-2139
  • 230 W 79th St #Frnt 1, New York, NY 10024
  • 250 85th St, New York, NY 10024 (212) 362-2139
  • 9 Claremont Ave, Maplewood, NJ 07040
  • Portland, OR
  • Winona, MN
  • Hillsboro, OR
  • 230 W 79th St Frnt 1, New York, NY 10024 (212) 362-2139

Resumes

Chief Executive Officer and Founder

Location:
P.O. Box 894511, Los Angeles, CA
Industry:
Computer Software
Work:
AT&T Labs Research
Senior Member of Technical Staff
Education:
MIT Sloan School of Management 2020 - 2020
Oregon Health & Science University 2001 - 2008
Doctor of Philosophy (Ph.D.), Computer Science
Saint Mary's University of Minnesota 1997 - 2001
Bachelor of Arts, Mathematics and Computer Science
Skills:
Machine Learning
Python
Speech Synthesis
Emotion Recognition
Natural Language Processing
Computer Science
Prosody
Perl
Statistical Modeling
Speech Analytics
Sentiment Analysis
Algorithms
Question Answering
Text Mining
Speech Technology
Speech Processing
Human Computer Interaction
Data Analysis
Speech
Programming
Voice User Interface Design
Team Building
Team Leadership
Languages:
English
Hindi
Bengali
Oriya
Chief Executive Officer and Founder

Location:
New York, NY
Work:
Surestart
Chief Executive Officer and Founder
Director of AI Research

Location:
New York, NY
Work:
Affectiva
Director of AI Research

Publications

Us Patents

System And Method For Tightly Coupling Automatic Speech Recognition And Search

US Patent:
20110144995, Jun 16, 2011
Filed:
Dec 15, 2009
Appl. No.:
12/638649
Inventors:
Srinivas BANGALORE - Morristown NJ, US
Taniya MISHRA - New York NY, US
Assignee:
AT&T Intellectual Property I, L.P. - Reno NV
International Classification:
G10L 15/00
G06F 17/30
US Classification:
704251, 707728, 704E15001, 707723, 707E17109
Abstract:
Disclosed herein are systems, methods, and computer-readable storage media for performing a search. A system configured to practice the method first receives from an automatic speech recognition (ASR) system a word lattice based on a speech query and receives indexed documents from an information repository. The system composes, based on the word lattice and the indexed documents, at least one triple including a query word, a selected indexed document, and a weight. The system generates an N-best path through the word lattice based on the at least one triple and re-ranks ASR output based on the N-best path. The system aggregates each weight across the query words to generate N-best listings and returns search results to the speech query based on the re-ranked ASR output and the N-best listings. The lattice can be a confusion network, the arc density of which can be adjusted for a desired performance level.
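The coupling of ASR lattice scores with document-index weights described in this abstract can be illustrated with toy data. This is a simplified sketch, not the patented method: the confusion-network representation, the inverted index, and the combined scoring formula below are all assumptions chosen for clarity.

```python
from itertools import product

# Toy confusion network: one slot per word position, each slot holding
# (word, ASR posterior) alternatives. Values are invented for illustration.
confusion_network = [
    [("flights", 0.7), ("lights", 0.3)],
    [("to", 0.9), ("two", 0.1)],
    [("boston", 0.6), ("austin", 0.4)],
]

# Inverted index: word -> {doc_id: relevance weight}. Each (word, doc, weight)
# entry plays the role of one "triple" from the abstract.
index = {
    "flights": {"d1": 1.0, "d2": 0.8},
    "boston": {"d1": 0.9},
    "austin": {"d2": 0.7},
}

def rerank(cn, index, n_best=3):
    """Score every lattice path by ASR confidence times aggregated
    index evidence, and return the top n_best paths."""
    scored = []
    for path in product(*cn):
        words = [w for w, _ in path]
        asr_score = 1.0
        for _, p in path:
            asr_score *= p
        # Aggregate triple weights across all query words on this path.
        doc_weight = sum(
            wt for word in words for wt in index.get(word, {}).values()
        )
        scored.append((asr_score * (1.0 + doc_weight), words))
    scored.sort(reverse=True)
    return scored[:n_best]
```

Paths whose words also match indexed documents are promoted over paths that the ASR alone preferred, which is the re-ranking effect the abstract describes.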

System And Method For Using Prosody For Voice-Enabled Search

US Patent:
20120072217, Mar 22, 2012
Filed:
Sep 17, 2010
Appl. No.:
12/884959
Inventors:
Srinivas BANGALORE - Morristown NJ, US
Junlan Feng - Basking Ridge NJ, US
Michael Johnston - New York NY, US
Taniya Mishra - New York NY, US
Assignee:
AT&T Intellectual Property I, L.P. - Reno NV
International Classification:
G10L 15/06
US Classification:
704243, 704E15014
Abstract:
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating relevant responses to a user query with voice-enabled search. A system practicing the method receives a word lattice generated by an automatic speech recognizer based on a user speech and a prosodic analysis of the user speech, generates a reweighted word lattice based on the word lattice and the prosodic analysis, approximates based on the reweighted word lattice one or more relevant responses to the query, and presents to a user the responses to the query. The prosodic analysis examines metalinguistic information of the user speech and can identify the most salient subject matter of the speech, assess how confident a speaker is in the content of his or her speech, and identify the attitude, mood, emotion, sentiment, etc. of the speaker. Other information not described in the content of the speech can also be used.
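The reweighting step in this abstract — blending ASR scores with a prosodic prominence signal — can be sketched as a simple per-word interpolation. The function name, the dictionary inputs, and the mixing parameter `alpha` are illustrative assumptions, not the patent's formulation.

```python
def reweight_lattice(word_scores, prominence, alpha=0.5):
    """Blend ASR confidence with prosodic prominence per word.

    word_scores: {word: ASR confidence in [0, 1]}
    prominence:  {word: prosodic prominence in [0, 1]}; words the
                 prosodic analysis did not flag default to 0.0.
    """
    return {
        w: (1 - alpha) * s + alpha * prominence.get(w, 0.0)
        for w, s in word_scores.items()
    }
```

Words the speaker emphasized gain weight relative to unstressed words, so the most salient subject matter of the query dominates the downstream search.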

System And Method For Enhancing Voice-Enabled Search Based On Automated Demographic Identification

US Patent:
20120072219, Mar 22, 2012
Filed:
Sep 22, 2010
Appl. No.:
12/888012
Inventors:
Michael JOHNSTON - New York NY, US
Srinivas Bangalore - Morristown NJ, US
Junlan Feng - Basking Ridge NJ, US
Taniya Mishra - New York NY, US
Assignee:
AT&T Intellectual Property I, L.P. - Reno NV
International Classification:
G10L 15/04
US Classification:
704251, 704E15005
Abstract:
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.
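The last sentence of this abstract — voice-derived demographic metadata that can be combined with or override self-reported information — can be sketched as a confidence-gated merge. The data shapes and the threshold rule are assumptions for illustration only.

```python
def merge_demographics(self_reported, detected, threshold=0.8):
    """Merge speaker demographics for the question-answering engine.

    self_reported: {feature: value} supplied by the user.
    detected:      {feature: (value, classifier confidence)} identified
                   from voice characteristics of the received speech.
    A detected value overrides a self-reported one only when the
    classifier is confident; otherwise it fills gaps.
    """
    merged = dict(self_reported)
    for feature, (value, conf) in detected.items():
        if conf >= threshold or feature not in merged:
            merged[feature] = value
    return merged
```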

User Profile And Its Location In A Clustered Profile Landscape

US Patent:
20120089605, Apr 12, 2012
Filed:
Oct 8, 2010
Appl. No.:
12/901075
Inventors:
Srinivas BANGALORE - Morristown NJ, US
Junlan FENG - Basking Ridge NJ, US
Michael James Robert JOHNSTON - New York NY, US
Taniya MISHRA - New York NY, US
Assignee:
AT&T INTELLECTUAL PROPERTY I, L.P. - Reno NV
International Classification:
G06F 17/30
US Classification:
707737, 707E17018, 707E17089
Abstract:
Delivering targeted content includes collecting, via at least one tangible processor, user activity data for users during a specified time period. Questions asked by the users during the specified time period are extracted from the user activity data, via the at least one tangible processor, and stored in user profiles for the users. The user profiles are clustered, via the at least one tangible processor, based on the questions asked. Targeted content is delivered, via the at least one tangible processor, to a subset of the users based on the clustering.
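Clustering profiles by the questions users asked can be sketched with a greedy set-similarity pass. The term-set representation, Jaccard similarity, and greedy assignment are illustrative choices; the patent does not specify a clustering algorithm here.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of question terms."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cluster_profiles(profiles, threshold=0.3):
    """Greedily cluster user profiles by question-term overlap.

    profiles: {user_id: set of terms from that user's questions}.
    Each user joins the first cluster whose accumulated term set is
    similar enough; otherwise a new cluster is opened.
    """
    clusters = []  # each entry: (term set, [user ids])
    for user, terms in profiles.items():
        for centroid, members in clusters:
            if jaccard(terms, centroid) >= threshold:
                centroid |= terms  # grow the cluster vocabulary
                members.append(user)
                break
        else:
            clusters.append((set(terms), [user]))
    return [members for _, members in clusters]
```

Targeted content would then be delivered per cluster rather than per user.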

Personal Customer Care Agent

US Patent:
20120095861, Apr 19, 2012
Filed:
Oct 15, 2010
Appl. No.:
12/905172
Inventors:
Junlan FENG - Basking Ridge NJ, US
Srinivas BANGALORE - Morristown NJ, US
Michael James Robert JOHNSTON - New York NY, US
Taniya MISHRA - New York NY, US
Assignee:
AT&T INTELLECTUAL PROPERTY I, L.P. - Reno NV
International Classification:
G06Q 30/00
G06Q 10/00
G10L 15/26
G06F 15/173
US Classification:
705 267, 709224, 705347, 705303, 704235
Abstract:
Aggregating information includes configuring, by at least one processor, a user profile that indicates user preferences for aggregated information. The at least one processor monitors information sources, including the World Wide Web, business websites of interest, and online social media, based on the user preferences. Data obtained from the information sources is presented by the at least one processor, based on the monitoring, in accordance with a presentation format, as the aggregated information, based on the user preferences. The at least one processor triggers updating of the presented aggregated information based on a change to the data at one or more of the information sources or a change to the user profile.
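The update trigger in this abstract — refresh when monitored data or the profile changes — can be sketched with content hashing. Fetching the sources and rendering the presentation format are out of scope here; the function names and data shapes are assumptions.

```python
import hashlib

def snapshot(sources):
    """Hash each monitored source's current content so later change
    detection is a cheap comparison. sources: {name: fetched text}."""
    return {name: hashlib.sha256(text.encode()).hexdigest()
            for name, text in sources.items()}

def needs_update(old_snap, sources, old_profile, profile):
    """Trigger re-presentation of the aggregated information when any
    source's data changed or the user profile changed."""
    return snapshot(sources) != old_snap or profile != old_profile
```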

System And Method For Generating Challenge Utterances For Speaker Verification

US Patent:
20120130714, May 24, 2012
Filed:
Nov 24, 2010
Appl. No.:
12/954094
Inventors:
Ilija Zeljkovic - Scotch Plains NJ, US
Taniya Mishra - New York NY, US
Amanda Stent - Chatham NJ, US
Ann K. Syrdal - Morristown NJ, US
Jay Wilpon - Warren NJ, US
Assignee:
AT&T Intellectual Property I, L.P. - Reno NV
International Classification:
G10L 17/00
G10L 15/26
US Classification:
704235, 704246, 704249, 704E17002, 704E15043
Abstract:
Disclosed herein are systems, methods, and non-transitory computer-readable storage media relating to speaker verification. In one aspect, a system receives a first user identity from a second user and, based on the identity, accesses voice characteristics. The system randomly generates a challenge sentence according to a rule and/or grammar, based on the voice characteristics, and prompts the second user to speak the challenge sentence. The system verifies that the second user is the first user if the spoken challenge sentence matches the voice characteristics. In an enrollment aspect, the system constructs an enrollment phrase that covers a minimum threshold of unique speech sounds based on speaker-distinctive phonemes, phoneme clusters, and prosody. The user then utters the enrollment phrase, and the system extracts voice characteristics for the user from the uttered phrase. The system generates a user profile, based on the voice characteristics, for generating random challenge sentences according to a grammar.
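Random challenge generation from a grammar can be sketched with a tiny context-free grammar and recursive expansion. The grammar below is invented for illustration; in the patented scheme the productions would be tied to the enrolled speaker's distinctive phonemes rather than chosen arbitrarily.

```python
import random

# Toy CFG for challenge sentences. Terminals are placeholders standing in
# for words chosen to cover speaker-distinctive speech sounds.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["the", "N"]],
    "N": [["zebra"], ["thistle"], ["jury"]],
    "VP": [["V", "NP"]],
    "V": [["chases"], ["measures"]],
}

def generate(symbol="S", rng=random):
    """Expand a nonterminal by randomly picking one production;
    a symbol with no productions is a terminal word."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for sym in rng.choice(GRAMMAR[symbol]):
        words.extend(generate(sym, rng))
    return words
```

Because the sentence is freshly generated per verification attempt, a replayed recording of an earlier session will not match the new challenge.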

System And Method For Dynamic Facial Features For Speaker Recognition

US Patent:
20120281885, Nov 8, 2012
Filed:
May 5, 2011
Appl. No.:
13/101704
Inventors:
Ann K. SYRDAL - Morristown NJ, US
Sumit Chopra - Jersey City NJ, US
Patrick Haffner - Atlantic Highlands NJ, US
Taniya Mishra - New York NY, US
Ilija Zeljkovic - Scotch Plains NJ, US
Eric Zavesky - Hoboken NJ, US
Assignee:
AT&T Intellectual Property I, L.P. - Atlanta GA
International Classification:
G06K 9/00
US Classification:
382116, 382118
Abstract:
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for performing speaker verification. A system configured to practice the method receives a request to verify a speaker, generates a text challenge that is unique to the request, and, in response to the request, prompts the speaker to utter the text challenge. Then the system records a dynamic image feature of the speaker as the speaker utters the text challenge, and performs speaker verification based on the dynamic image feature and the text challenge. Recording the dynamic image feature of the speaker can include recording video of the speaker while speaking the text challenge. The dynamic feature can include a movement pattern of head, lips, mouth, eyes, and/or eyebrows of the speaker. The dynamic image feature can relate to phonetic content of the speaker speaking the challenge, speech prosody, and the speaker's facial expression responding to content of the challenge.
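The final decision in this abstract combines two signals: the spoken text must match the unique challenge, and the recorded movement pattern must resemble the enrolled one. A minimal sketch, assuming the upstream video model has already reduced each dynamic image feature to a numeric vector:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dynamic-feature vectors
    (e.g. head/lip movement descriptors extracted from video)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def verify_speaker(enrolled_vec, observed_vec, challenge_matched, threshold=0.7):
    """Accept only if the utterance matched the one-time text challenge
    AND the observed movement pattern resembles the enrolled one."""
    return challenge_matched and cosine(enrolled_vec, observed_vec) >= threshold
```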

System And Method For Optimizing Speech Recognition And Natural Language Parameters With User Feedback

US Patent:
20120290298, Nov 15, 2012
Filed:
May 9, 2011
Appl. No.:
13/103665
Inventors:
Andrej LJOLJE - Morris Plains NJ, US
Diamantino Antonio Caseiro - Philadelphia PA, US
Mazin Gilbert - Warren NJ, US
Vincent Goffin - Summit NJ, US
Taniya Mishra - New York NY, US
Assignee:
AT&T Intellectual Property I, L.P. - Atlanta GA
International Classification:
G10L 15/26
US Classification:
704235, 704E15043
Abstract:
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for assigning saliency weights to words of an ASR model. The saliency values assigned to words within an ASR model are based on human perception judgments of previous transcripts. These saliency values are applied as weights to modify an ASR model such that the results of the weighted ASR model in converting a spoken document to a transcript provide a more accurate and useful transcription to the user.
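Applying saliency weights to rescore ASR output can be sketched as multiplicative reweighting of an N-best list. The hypothesis format and the per-word weight table are illustrative assumptions; in the patent the weights derive from human perception judgments of previous transcripts.

```python
def apply_saliency(hypotheses, saliency):
    """Rescore N-best ASR hypotheses with per-word saliency weights.

    hypotheses: [(list of words, ASR score)]
    saliency:   {word: weight}; unlisted words keep weight 1.0.
    """
    rescored = []
    for words, score in hypotheses:
        weight = 1.0
        for w in words:
            weight *= saliency.get(w, 1.0)
        rescored.append((words, score * weight))
    rescored.sort(key=lambda h: h[1], reverse=True)
    return rescored
```

A hypothesis containing words humans judged salient can overtake one the unweighted acoustic model preferred, yielding the more useful transcript the abstract describes.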