Fadi Biadsy Phones & Addresses

  • 68 E Shore Trl, Branchville, NJ 07826
  • Sandyston, NJ
  • Sanibel, FL
  • New York, NY
  • 343 Gold St, Brooklyn, NY 11201 (718) 855-1524
  • Sioux Falls, SD

Publications

US Patents

Log Mining To Modify Grammar-Based Text Processing

US Patent:
8473300, Jun 25, 2013
Filed:
Oct 8, 2012
Appl. No.:
13/646842
Inventors:
Pedro J. Moreno Mengibar - Jersey City NJ, US
Martin Jansche - New York NY, US
Fadi Biadsy - Brooklyn NY, US
Assignee:
Google Inc. - Mountain View CA
International Classification:
G10L 15/22
G10L 15/06
G10L 15/00
US Classification:
704275, 704244, 704257
Abstract:
Methods and systems for log mining for grammar-based text processing are provided. A method may comprise receiving, from a device, an activity log. The activity log may comprise one or more of an input instruction, a determined function based at least in part on a match of the input instruction to a grammar-based textual pattern including associations of a given function based on one or more grammars, and a response determination based on an acknowledgement of the determined function. The method may also comprise comparing at least a portion of the activity log with stored activity logs in order to determine a correlation between the activity log and the stored activity logs. The method may also comprise modifying the grammar-based textual pattern based on the determined correlation and providing information indicative of the modification to the device so as to update the grammar-based textual pattern.
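The log-mining step in the abstract can be pictured as a small aggregation: if the stored logs show that other devices consistently resolved the same input instruction to a different function than the grammar match produced, propose a modification to the grammar-based textual pattern. A minimal sketch, with hypothetical log fields and a majority-vote stand-in for the patent's correlation step:

```python
from collections import Counter

def mine_logs(activity_log, stored_logs, threshold=0.5):
    """If most stored logs that saw the same input instruction resolved it
    to a different function than this grammar match did, propose remapping
    the grammar-based textual pattern. Field names are illustrative."""
    instr = activity_log["instruction"]
    matched = activity_log["function"]
    outcomes = Counter(
        log["function"] for log in stored_logs if log["instruction"] == instr
    )
    if not outcomes:
        return None
    best, count = outcomes.most_common(1)[0]
    if best != matched and count / sum(outcomes.values()) >= threshold:
        return {instr: best}  # modification to push back to the device
    return None
```

The actual patent leaves the correlation measure open; majority vote over matching instructions is just one simple instantiation.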

Mapping An Audio Utterance To An Action Using A Classifier

US Patent:
8484025, Jul 9, 2013
Filed:
Oct 4, 2012
Appl. No.:
13/644986
Inventors:
Pedro J. Moreno Mengibar - Jersey City NJ, US
Martin Jansche - New York NY, US
Fadi Biadsy - New York NY, US
Assignee:
Google Inc. - Mountain View CA
International Classification:
G10L 15/28
US Classification:
704255, 704257, 7042562, 704270, 704 9, 704 10
Abstract:
Disclosed embodiments relate to mapping an utterance to an action using a classifier. One illustrative computing device includes a user interface having an input component. The computing device further includes a processor and a computer-readable storage medium, having stored thereon program instructions that, upon execution by the processor, cause the computing device to perform a set of operations including: receiving an audio utterance via the input component; determining a text string based on the utterance; determining a string-feature vector based on the text string; selecting a target classifier from a set of classifiers, wherein the target classifier is selected based on a determination that a string-feature criteria of the target classifier corresponds to at least one string-feature of the string-feature vector; and initiating a target action that corresponds to the target classifier.
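The selection logic in the abstract — pick the classifier whose string-feature criteria correspond to features of the transcribed utterance, then fire its action — can be sketched as follows. Everything here (the bag-of-tokens featurizer, set-overlap matching) is an illustrative assumption, not the patent's specification:

```python
def string_features(text):
    # Hypothetical featurizer: the set of lowercased tokens in the transcript.
    return set(text.lower().split())

class ActionClassifier:
    def __init__(self, criteria, action):
        self.criteria = criteria  # string-features this classifier responds to
        self.action = action

    def matches(self, features):
        # The "string-feature criteria" check, modeled here as set overlap
        # between the classifier's criteria and the string-feature vector.
        return bool(self.criteria & features)

def select_and_run(classifiers, transcript):
    """Select a target classifier for the utterance and return its action."""
    features = string_features(transcript)
    for clf in classifiers:
        if clf.matches(features):
            return clf.action  # initiate the target action
    return None
```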

Dialect-Specific Acoustic Language Modeling And Speech Recognition

US Patent:
8583432, Nov 12, 2013
Filed:
Jul 25, 2012
Appl. No.:
13/557983
Inventors:
Fadi Biadsy - New York NY, US
Lidia Mangu - New York NY, US
Hagen Soltau - Yorktown Heights NY, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G10L 15/26
G10L 15/06
G10L 15/04
G06F 17/27
G06F 17/21
US Classification:
704235, 704243, 704245, 704254, 704 9, 704 10
Abstract:
Methods and systems for automatic speech recognition and methods and systems for training acoustic language models are disclosed. One system for automatic speech recognition includes a dialect recognition unit and a controller. The dialect recognition unit is configured to analyze acoustic input data to identify portions of the acoustic input data that conform to a general language and to identify portions of the acoustic input data that conform to at least one dialect of the general language. In addition, the controller is configured to apply a general language model and at least one dialect language model to the input data to perform speech recognition by dynamically selecting between the models in accordance with each of the identified portions.
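The dynamic model selection the abstract describes — decode each portion of the input with the general language model or a dialect model, depending on what the dialect recognition unit identified — reduces to a dispatch loop. A minimal sketch with hypothetical callables standing in for the detector and the language models:

```python
def recognize(segments, dialect_detector, general_lm, dialect_lms):
    """Decode each acoustic segment with the language model matching its
    detected variety, falling back to the general model otherwise."""
    results = []
    for seg in segments:
        variety = dialect_detector(seg)  # e.g. "general" or a dialect id
        lm = dialect_lms.get(variety, general_lm)
        results.append(lm(seg))
    return results
```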

Sampling Training Data For An Automatic Speech Recognition System Based On A Benchmark Classification Distribution

US Patent:
8374865, Feb 12, 2013
Filed:
Apr 26, 2012
Appl. No.:
13/456671
Inventors:
Fadi Biadsy - New York NY, US
Pedro J. Moreno Mengibar - Jersey City NJ, US
Kaisuke Nakajima - Mountain View CA, US
Daniel Martin Bikel - New York NY, US
Assignee:
Google Inc. - Mountain View CA
International Classification:
G10L 15/06
G10L 15/08
US Classification:
704243, 7042562, 7042563
Abstract:
A set of benchmark text strings may be classified to provide a set of benchmark classifications. The benchmark text strings in the set may correspond to a benchmark corpus of benchmark utterances in a particular language. A benchmark classification distribution of the set of benchmark classifications may be determined. A respective classification for each text string in a corpus of text strings may also be determined. Text strings from the corpus of text strings may be sampled to form a training corpus of training text strings such that the classifications of the training text strings have a training text string classification distribution that is based on the benchmark classification distribution. The training corpus of training text strings may be used to train an automatic speech recognition (ASR) system.
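The sampling procedure in the abstract is concrete enough to sketch end to end: classify the benchmark strings, compute their classification distribution, then draw training strings from the larger corpus per class in matching proportions. The toy classifier below is a hypothetical stand-in for whatever classifier the system uses:

```python
import random
from collections import Counter

def classify(text):
    # Hypothetical classifier: bucket strings by a crude surface heuristic.
    if any(tok.isdigit() for tok in text.split()):
        return "numeric"
    if text.endswith("?"):
        return "question"
    return "statement"

def sample_to_distribution(corpus, benchmark, total, seed=0):
    """Sample `total` training strings from `corpus` so their classification
    distribution mirrors that of the benchmark corpus."""
    rng = random.Random(seed)
    bench_dist = Counter(classify(s) for s in benchmark)
    n_bench = sum(bench_dist.values())
    by_class = {}
    for s in corpus:
        by_class.setdefault(classify(s), []).append(s)
    training = []
    for cls, count in bench_dist.items():
        want = round(total * count / n_bench)
        pool = by_class.get(cls, [])
        training.extend(rng.sample(pool, min(want, len(pool))))
    return training
```

The resulting training corpus can then be fed to ASR training so its classification mix tracks the benchmark rather than the raw corpus.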

Conformer-Based Speech Conversion Model

US Patent:
20220310056, Sep 29, 2022
Filed:
Mar 16, 2022
Appl. No.:
17/655030
Inventors:
- Mountain View CA, US
Zhehuai Chen - Jersey City NJ, US
Fadi Biadsy - Mountain View CA, US
Pedro J. Moreno Mengibar - Jersey City NJ, US
Assignee:
Google LLC - Mountain View CA
International Classification:
G10L 13/027
G10L 25/18
G10L 15/22
G10L 15/16
G10L 13/047
Abstract:
A method for speech conversion includes receiving, as input to an encoder of a speech conversion model, an input spectrogram corresponding to an utterance, the encoder including a stack of self-attention blocks. The method further includes generating, as output from the encoder, an encoded spectrogram and receiving, as input to a spectrogram decoder of the speech conversion model, the encoded spectrogram generated as output from the encoder. The method further includes generating, as output from the spectrogram decoder, an output spectrogram corresponding to a synthesized speech representation of the utterance.
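The encoder in the abstract is "a stack of self-attention blocks" mapping an input spectrogram to an encoded spectrogram, which a spectrogram decoder then turns into output frames. The NumPy sketch below shows only that data flow; a real Conformer block also contains convolution and feed-forward modules, and the linear decoder here is a toy stand-in:

```python
import numpy as np

def self_attention_block(x):
    # Single-head scaled dot-product self-attention with a residual
    # connection; a simplified stand-in for one Conformer block.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return x + attn @ x

def encode(spectrogram, num_blocks=3):
    """Encoder: a stack of self-attention blocks over the input spectrogram."""
    h = spectrogram
    for _ in range(num_blocks):
        h = self_attention_block(h)
    return h

def decode(encoded, out_dim):
    """Toy spectrogram decoder: a fixed per-frame linear projection."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal((encoded.shape[-1], out_dim)) * 0.01
    return encoded @ w
```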

Direct Speech-To-Speech Translation Via Machine Learning

US Patent:
20210209315, Jul 8, 2021
Filed:
Mar 7, 2020
Appl. No.:
17/056554
Inventors:
- Mountain View CA, US
Zhifeng Chen - Sunnyvale CA, US
Yonghui Wu - Palo Alto CA, US
Melvin Johnson - Sunnyvale CA, US
Fadi Biadsy - Sandyston NJ, US
Ron Weiss - New York NY, US
Wolfgang Macherey - Sunnyvale CA, US
International Classification:
G06F 40/47
G06F 40/58
Abstract:
The present disclosure provides systems and methods that train and use machine-learned models such as, for example, sequence-to-sequence models, to perform direct and text-free speech-to-speech translation. In particular, aspects of the present disclosure provide an attention-based sequence-to-sequence neural network which can directly translate speech from one language into speech in another language, without relying on an intermediate text representation.
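The attention mechanism at the heart of such a sequence-to-sequence model scores each encoder state against the current decoder state and mixes the encoder states into a context vector. A minimal dot-product-attention sketch (one of several scoring functions such a model might use):

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """One attention step: score each source frame against the decoder
    state, softmax-normalize, and mix encoder states into a context."""
    scores = encoder_states @ decoder_state
    scores -= scores.max()  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()
    return weights @ encoder_states, weights
```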

Speech Recognition Using Log-Linear Model

US Patent:
20160275946, Sep 22, 2016
Filed:
May 11, 2015
Appl. No.:
14/708465
Inventors:
- Mountain View CA, US
Fadi Biadsy - New York NY, US
International Classification:
G10L 15/197
G10L 15/26
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, relating to generating log-linear models. In some implementations, n-gram parameter values derived from an n-gram language model are obtained. N-gram features for a log-linear language model are determined based on the n-grams corresponding to the obtained n-gram parameter values. A weight for each of the determined n-gram features is determined, where the weight is determined based on (i) an n-gram parameter value that is derived from the n-gram language model and that corresponds to a particular n-gram, and (ii) an n-gram parameter value that is derived from the n-gram language model and that corresponds to an n-gram that is a sub-sequence within the particular n-gram. A log-linear language model having the determined n-gram features is generated, where the determined n-gram features in the log-linear language model have weights that are initialized based on the determined weights.
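One reading of the weight-initialization step in the abstract — each n-gram feature's weight combines the n-gram's own parameter with the parameter of a sub-sequence n-gram — is a difference of log-probabilities, so that summed feature weights reproduce the n-gram score. This sketch encodes that assumed reading, not the patent's exact formula:

```python
def init_log_linear_weights(ngram_logprobs):
    """Initialize log-linear feature weights from n-gram LM parameters.
    Assumption: each n-gram's weight is its own log-probability minus that
    of its suffix sub-sequence, when that sub-sequence is in the model."""
    weights = {}
    for ngram, logp in ngram_logprobs.items():
        sub = ngram[1:]  # the sub-sequence within the particular n-gram
        if len(sub) and sub in ngram_logprobs:
            weights[ngram] = logp - ngram_logprobs[sub]
        else:
            weights[ngram] = logp
    return weights
```

Training would then refine these initialized weights rather than start from zero.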

Dialect-Specific Acoustic Language Modeling And Speech Recognition

US Patent:
20150287405, Oct 8, 2015
Filed:
Jul 18, 2012
Appl. No.:
13/552401
Inventors:
Fadi Biadsy - New York NY, US
Lidia Mangu - New York NY, US
Hagen Soltau - Yorktown Heights NY, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G10L 15/18
G10L 15/06
Abstract:
Methods and systems for automatic speech recognition and methods and systems for training acoustic language models are disclosed. In accordance with one automatic speech recognition method, an acoustic input data set is analyzed to identify portions of the input data set that conform to a general language and to identify portions of the input data set that conform to at least one dialect of the general language. In addition, a general language model and at least one dialect language model is applied to the input data set to perform speech recognition by dynamically selecting between the models in accordance with each of the identified portions. Further, speech recognition results obtained in accordance with the application of the models is output.
Fadi Biadsy from Sandyston, NJ, age ~46