
Maria Parada Phones & Addresses

  • 2924 Camby Rd, Antioch, CA 94509
  • Daly City, CA
  • San Francisco, CA

Resumes

EHS Consultant at EORM

Position: EHS Consultant at EORM
Location: San Francisco Bay Area
Industry: Environmental Services
Work: EHS Consultant at EORM
Education: San Jose State University, MBA, HRM, ASC

Maria Parada

Location: United States

Business Records

Name / Title: Maria O. Parada, Director
Company / Classification: EL BUEN PASTOR BAPTIST CHURCH-KATY

Publications

US Patents

Determining Hotword Suitability

US Patent: 20140012586, Jan 9, 2014
Filed: Aug 6, 2012
Appl. No.: 13/567572
Inventors:
Andrew E. Rubin - Portola Valley CA, US
Johan Schalkwyk - Scarsdale NY, US
Maria Carolina Parada San Martin - Palo Alto CA, US
Assignee: GOOGLE INC. - Mountain View CA
International Classification: G10L 21/00
US Classification: 704273, 704276, 704E21001
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining hotword suitability. In one aspect, a method includes receiving speech data that encodes a candidate hotword spoken by a user, evaluating the speech data or a transcription of the candidate hotword, using one or more predetermined criteria, generating a hotword suitability score for the candidate hotword based on evaluating the speech data or a transcription of the candidate hotword, using one or more predetermined criteria, and providing a representation of the hotword suitability score for display to the user.
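
As a rough illustration of the scoring idea in this abstract, the sketch below rates a candidate hotword against a few hand-picked criteria (word length, vowel/consonant balance, and overlap with very common words) and clamps the result to [0, 1]. The criteria, weights, and the hotword_suitability function are assumptions chosen for demonstration, not the predetermined criteria defined in the patent.

```python
# Illustrative hotword-suitability scorer. The criteria and weights are
# assumptions for demonstration only, not the patent's actual criteria.

COMMON_WORDS = {"the", "okay", "yes", "no", "hello", "stop", "go"}

def hotword_suitability(transcript: str) -> float:
    """Return a suitability score in [0, 1] for a candidate hotword."""
    word = transcript.strip().lower()
    if not word:
        return 0.0

    # Criterion 1: longer hotwords are harder to trigger by accident.
    length_score = min(len(word) / 10.0, 1.0)

    # Criterion 2: a rough phonetic-richness proxy, mixing vowels and consonants.
    vowels = sum(c in "aeiou" for c in word)
    consonants = sum(c.isalpha() and c not in "aeiou" for c in word)
    richness_score = 1.0 - abs(vowels - consonants) / max(len(word), 1)

    # Criterion 3: penalize words that occur constantly in everyday speech.
    commonness_penalty = 0.5 if word in COMMON_WORDS else 0.0

    score = 0.5 * length_score + 0.5 * richness_score - commonness_penalty
    return max(0.0, min(1.0, score))

if __name__ == "__main__":
    for candidate in ["ok", "computer", "abracadabra", "hello"]:
        print(f"{candidate:>12}: {hotword_suitability(candidate):.2f}")
```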

Unified Endpointer Using Multitask and Multidomain Learning

US Patent: 20210142174, May 13, 2021
Filed: Jan 20, 2021
Appl. No.: 17/152918
Inventors:
- Mountain View CA, US
Bo Li - Fremont CA, US
Gabor Simko - Santa Clara CA, US
Maria Carolina Parada San Martin - Boulder CO, US
Sean Matthew Shannon - Mountain View CA, US
Assignee: Google LLC - Mountain View CA
International Classification: G06N 3/08, G06N 3/04, G10L 15/16, G06N 20/20, G06K 9/62, G06N 5/04
Abstract:
A method for training an endpointer model includes short-form speech utterances and long-form speech utterances. The method also includes providing a short-form speech utterance as input to a shared neural network, the shared neural network configured to learn shared hidden representations suitable for both voice activity detection (VAD) and end-of-query (EOQ) detection. The method also includes generating, using a VAD classifier, a sequence of predicted VAD labels and determining a VAD loss by comparing the sequence of predicted VAD labels to a corresponding sequence of reference VAD labels. The method also includes, generating, using an EOQ classifier, a sequence of predicted EOQ labels and determining an EOQ loss by comparing the sequence of predicted EOQ labels to a corresponding sequence of reference EOQ labels. The method also includes training, using a cross-entropy criterion, the endpointer model based on the VAD loss and the EOQ loss.
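
The multitask setup described in this abstract can be sketched as a shared encoder feeding two per-frame classifiers whose cross-entropy losses are summed. The PyTorch framing, the GRU encoder, and all layer sizes below are illustrative assumptions; the patent does not specify this particular architecture.

```python
# Minimal multitask endpointer sketch: a shared recurrent encoder produces
# per-frame hidden states, one head predicts voice-activity (VAD) labels,
# another predicts end-of-query (EOQ) labels, and both heads are trained
# with cross-entropy against reference label sequences.
import torch
import torch.nn as nn

class UnifiedEndpointer(nn.Module):
    def __init__(self, feat_dim=40, hidden=64, vad_classes=2, eoq_classes=2):
        super().__init__()
        self.shared = nn.GRU(feat_dim, hidden, batch_first=True)  # shared hidden representation
        self.vad_head = nn.Linear(hidden, vad_classes)            # VAD classifier
        self.eoq_head = nn.Linear(hidden, eoq_classes)            # EOQ classifier

    def forward(self, frames):                  # frames: (batch, time, feat_dim)
        h, _ = self.shared(frames)              # (batch, time, hidden)
        return self.vad_head(h), self.eoq_head(h)

def multitask_loss(vad_logits, eoq_logits, vad_ref, eoq_ref):
    ce = nn.CrossEntropyLoss()
    # Flatten (batch, time, classes) -> (batch*time, classes) for cross-entropy.
    vad_loss = ce(vad_logits.reshape(-1, vad_logits.size(-1)), vad_ref.reshape(-1))
    eoq_loss = ce(eoq_logits.reshape(-1, eoq_logits.size(-1)), eoq_ref.reshape(-1))
    return vad_loss + eoq_loss

if __name__ == "__main__":
    model = UnifiedEndpointer()
    frames = torch.randn(8, 100, 40)             # a batch of 100-frame utterances
    vad_ref = torch.randint(0, 2, (8, 100))      # reference VAD labels per frame
    eoq_ref = torch.randint(0, 2, (8, 100))      # reference EOQ labels per frame
    vad_logits, eoq_logits = model(frames)
    loss = multitask_loss(vad_logits, eoq_logits, vad_ref, eoq_ref)
    loss.backward()
    print(float(loss))
```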

End of Query Detection

US Patent: 20200168242, May 28, 2020
Filed: Jan 31, 2020
Appl. No.: 16/778222
Inventors:
- Mountain View CA, US
Maria Carolina Parada San Martin - Palo Alto CA, US
Sean Matthew Shannon - Mountain View CA, US
Assignee: Google LLC - Mountain View CA
International Classification: G10L 25/78, G10L 15/18, G10L 15/065, G10L 15/187, G10L 15/22
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting an end of a query are disclosed. In one aspect, a method includes the actions of receiving audio data that corresponds to an utterance spoken by a user. The actions further include applying, to the audio data, an end of query model. The actions further include determining the confidence score that reflects a likelihood that the utterance is a complete utterance. The actions further include comparing the confidence score that reflects the likelihood that the utterance is a complete utterance to a confidence score threshold. The actions further include determining whether the utterance is likely complete or likely incomplete. The actions further include providing, for output, an instruction to (i) maintain a microphone that is receiving the utterance in an active state or (ii) deactivate the microphone that is receiving the utterance.
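
The decision logic at the end of this abstract (compare a confidence score to a threshold, then keep or release the microphone) can be sketched as follows. The end_of_query_confidence stub, its trailing-silence heuristic, and the threshold value are placeholders, not the end-of-query model the patent describes.

```python
# Sketch of the thresholding step only: given a confidence score from some
# end-of-query model (stubbed out here), decide whether to keep the
# microphone open or deactivate it. Threshold and heuristic are assumed.

CONFIDENCE_THRESHOLD = 0.8  # assumed value for illustration

def end_of_query_confidence(frame_energies: list[float]) -> float:
    """Placeholder for the end-of-query model: treat a long run of
    near-silent trailing frames as evidence the utterance is complete."""
    trailing_silence = 0
    for energy in reversed(frame_energies):
        if energy < 0.05:
            trailing_silence += 1
        else:
            break
    return min(trailing_silence / 30.0, 1.0)  # about 30 silent frames -> confident

def microphone_instruction(frame_energies: list[float]) -> str:
    score = end_of_query_confidence(frame_energies)
    if score >= CONFIDENCE_THRESHOLD:
        return "deactivate microphone"     # utterance is likely complete
    return "keep microphone active"        # utterance is likely incomplete

if __name__ == "__main__":
    still_talking = [0.4] * 50 + [0.02] * 5
    finished = [0.4] * 50 + [0.02] * 40
    print(microphone_instruction(still_talking))  # keep microphone active
    print(microphone_instruction(finished))       # deactivate microphone
```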

Unified Endpointer Using Multitask and Multidomain Learning

US Patent: 20200117996, Apr 16, 2020
Filed: Dec 11, 2019
Appl. No.: 16/711172
Inventors:
- Mountain View CA, US
Bo Li - Mountain View CA, US
Gabor Simko - Santa Clara CA, US
Maria Carolina Parada San Martin - Boulder CO, US
Sean Matthew Shannon - Mountain View CA, US
Assignee: Google LLC - Mountain View CA
International Classification: G06N 3/08, G06N 3/04, G06N 5/04, G06N 20/20, G06K 9/62, G10L 15/16
Abstract:
A method for training an endpointer model includes short-form speech utterances and long-form speech utterances. The method also includes providing a short-form speech utterance as input to a shared neural network, the shared neural network configured to learn shared hidden representations suitable for both voice activity detection (VAD) and end-of-query (EOQ) detection. The method also includes generating, using a VAD classifier, a sequence of predicted VAD labels and determining a VAD loss by comparing the sequence of predicted VAD labels to a corresponding sequence of reference VAD labels. The method also includes, generating, using an EOQ classifier, a sequence of predicted EOQ labels and determining an EOQ loss by comparing the sequence of predicted EOQ labels to a corresponding sequence of reference EOQ labels. The method also includes training, using a cross-entropy criterion, the endpointer model based on the VAD loss and the EOQ loss.

Convolutional Neural Networks

US Patent: 20200051551, Feb 13, 2020
Filed: Oct 16, 2019
Appl. No.: 16/654041
Inventors:
- Mountain View CA, US
Maria Carolina Parada San Martin - Palo Alto CA, US
Assignee: Google LLC - Mountain View CA
International Classification: G10L 15/16, G06N 3/04
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for keyword spotting. One of the methods includes training, by a keyword detection system, a convolutional neural network for keyword detection by providing a two-dimensional set of input values to the convolutional neural network, the input values including a first dimension in time and a second dimension in frequency, and performing convolutional multiplication on the two-dimensional set of input values for a filter using a frequency stride greater than one to generate a feature map.
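
The frequency-strided convolution described in this abstract can be illustrated with a single PyTorch layer applied to a (batch, channel, time, frequency) input. The 40-bin spectrogram, the 8x20 filter, and the stride of 4 along frequency are assumed values for demonstration, not parameters taken from the patent.

```python
# One convolutional filter bank sweeping time with stride 1 and frequency
# with stride 4 (a frequency stride greater than one), producing a feature
# map that is downsampled along the frequency axis. Sizes are illustrative.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=32,
                 kernel_size=(8, 20),   # (time, frequency) extent of the filter
                 stride=(1, 4))         # frequency stride > 1

spectrogram = torch.randn(1, 1, 100, 40)   # 100 frames x 40 frequency bins
feature_map = conv(spectrogram)
print(feature_map.shape)                   # torch.Size([1, 32, 93, 6])
```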

Convolutional Neural Networks

US Patent: 20160283841, Sep 29, 2016
Filed: Jul 22, 2015
Appl. No.: 14/805704
Inventors:
- Mountain View CA, US
Maria Carolina Parada San Martin - Palo Alto CA, US
International Classification: G06N 3/08, G06N 3/04
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for keyword spotting. One of the methods includes training, by a keyword detection system, a convolutional neural network for keyword detection by providing a two-dimensional set of input values to the convolutional neural network, the input values including a first dimension in time and a second dimension in frequency, and performing convolutional multiplication on the two-dimensional set of input values for a filter using a frequency stride greater than one to generate a feature map.

Low-Rank Hidden Input Layer for Speech Recognition Neural Network

US Patent: 20160092766, Mar 31, 2016
Filed: Feb 9, 2015
Appl. No.: 14/616881
Inventors:
- Mountain View CA, US
Maria Carolina Parada San Martin - Palo Alto CA, US
International Classification: G06N 3/08, G10L 25/30, G10L 15/06, G06N 3/04, G06N 7/00
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a deep neural network. One of the methods for training a deep neural network that includes a low rank hidden input layer and an adjoining hidden layer, the low rank hidden input layer including a first matrix A and a second matrix B with dimensions i×m and m×o, respectively, to identify a keyword includes receiving a feature vector including i values that represent features of an audio signal encoding an utterance, determining, using the low rank hidden input layer, an output vector including o values using the feature vector, determining, using the adjoining hidden layer, another vector using the output vector, determining a confidence score that indicates whether the utterance includes the keyword using the other vector, and adjusting weights for the low rank hidden input layer using the confidence score.
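
The factored input layer can be sketched as two stacked linear maps A (i x m) and B (m x o) with a small inner dimension m, followed by an adjoining hidden layer and a keyword-confidence output. The sizes (i = 400, m = 32, o = 256), the sigmoid output, and the PyTorch framing are assumptions made for illustration.

```python
# Sketch of a keyword scorer whose first layer is factored into two
# matrices A (i x m) and B (m x o), replacing one full i x o weight matrix.
import torch
import torch.nn as nn

class LowRankKeywordNet(nn.Module):
    def __init__(self, i=400, m=32, o=256, hidden=128):
        super().__init__()
        # Low-rank input layer: x -> A -> B, with no nonlinearity in between.
        self.A = nn.Linear(i, m, bias=False)   # i x m
        self.B = nn.Linear(m, o, bias=False)   # m x o
        self.adjoining = nn.Linear(o, hidden)  # adjoining hidden layer
        self.out = nn.Linear(hidden, 1)        # keyword confidence

    def forward(self, features):               # features: (batch, i)
        low_rank = self.B(self.A(features))    # output vector of o values
        h = torch.relu(self.adjoining(low_rank))
        return torch.sigmoid(self.out(h))      # confidence that the keyword is present

if __name__ == "__main__":
    net = LowRankKeywordNet()
    x = torch.randn(4, 400)        # 4 feature vectors of i = 400 values each
    print(net(x).squeeze(-1))      # one confidence score per utterance
```

With these assumed sizes the factored layer stores 400*32 + 32*256 = 20,992 weights, versus 400*256 = 102,400 for an unfactored i x o matrix, which is the point of the low-rank construction.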

Keyword Detection Based on Acoustic Alignment

US Patent: 20150279351, Oct 1, 2015
Filed: Apr 11, 2013
Appl. No.: 13/861020
Inventors:
- Mountain View CA, US
Maria Carolina Parada San Martin - Palo Alto CA, US
Johan Schalkwyk - Scarsdale NY, US
International Classification: G10L 15/02
Abstract:
Embodiments pertain to automatic speech recognition in mobile devices to establish the presence of a keyword. An audio waveform is received at a mobile device. Front-end feature extraction is performed on the audio waveform, followed by acoustic modeling, high level feature extraction, and output classification to detect the keyword. Acoustic modeling may use a neural network or Gaussian mixture modeling, and high level feature extraction may be done by aligning the results of the acoustic modeling with expected event vectors that correspond to a keyword.
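
The "aligning with expected event vectors" step can be sketched as a monotonic dynamic-programming alignment of per-frame acoustic-model scores against a keyword's expected unit sequence, with the length-normalized alignment score thresholded at the end. The toy posteriors, unit inventory, and detection threshold below are assumptions; the front-end feature extraction and final classifier from the abstract are not reproduced.

```python
# Sketch of the alignment stage only: per-frame acoustic-model posteriors
# are aligned against the expected unit sequence of a keyword with a simple
# monotonic dynamic program, and the alignment score is thresholded.
import numpy as np

def alignment_score(posteriors: np.ndarray, expected_units: list[int]) -> float:
    """Best average log-probability of walking through expected_units in
    order across the frames (each unit must occupy at least one frame)."""
    num_frames = posteriors.shape[0]
    num_units = len(expected_units)
    neg_inf = -np.inf
    # dp[t, k]: best score aligning frames 0..t with the first k+1 units.
    dp = np.full((num_frames, num_units), neg_inf)
    log_post = np.log(posteriors + 1e-10)
    dp[0, 0] = log_post[0, expected_units[0]]
    for t in range(1, num_frames):
        for k in range(num_units):
            stay = dp[t - 1, k]                               # keep emitting unit k
            advance = dp[t - 1, k - 1] if k > 0 else neg_inf  # move on to the next unit
            dp[t, k] = max(stay, advance) + log_post[t, expected_units[k]]
    return dp[num_frames - 1, num_units - 1] / num_frames     # length-normalized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.dirichlet(np.ones(10), size=30)   # 30 frames x 10-unit posteriors
    keyword_units = [3, 7, 1]                      # expected event sequence for the keyword
    score = alignment_score(frames, keyword_units)
    print("keyword detected" if score > -2.0 else "no keyword")  # assumed threshold
```
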
Maria A Parada from Antioch, CA, age ~68