
Nicholas Apostoloff Phones & Addresses

  • 1499 Hugo Ln, San Jose, CA 95118
  • San Rafael, CA
  • Lincoln Blvd, San Francisco, CA 94122

Work

Company: Digital Domain, since Dec 2010. Position: Principal Engineer

Education

Degree: DPhil. School: University of Oxford, 2002 to 2007. Specialities: Computer Vision

Skills

Computer Vision • C++ • Python • Maya • Machine Learning • Computer Graphics • MATLAB • Nuke • Facial Animation • Visual Effects • Motion Capture • Subversion • Image Segmentation • Object Recognition • Optimization • OpenCV • Object Tracking • FACS

Industries

Research

Resumes

Machine Learning Manager

Location:
575 North Pastoria Ave, Sunnyvale, CA 94085
Industry:
Research
Work:
Digital Domain since Dec 2010
Principal Engineer

ImageMovers Digital Jan 2008 - Dec 2010
Senior Research & Development Engineer / Technical Development Lead

University of Oxford Sep 2006 - Oct 2007
Research Assistant

One Of Us 2006 - 2006
Shake Development Consultant

Microsoft Research Cambridge 2005 - 2006
Intern
Education:
University of Oxford 2002 - 2007
DPhil., Computer Vision
The Australian National University 2001 - 2002
MPhil., Computer Vision
The Australian National University 1996 - 2000
BEng. with Honours, Systems Engineering
The Australian National University 1996 - 1998
BInfTech, Information Technology
Phillip College 1994 - 1995
Skills:
Computer Vision
C++
Python
Maya
Machine Learning
Computer Graphics
MATLAB
Nuke
Facial Animation
Visual Effects
Motion Capture
Subversion
Image Segmentation
Object Recognition
Optimization
OpenCV
Object Tracking
FACS

Publications

US Patents

System And Method Of Virtual Plant Field Modelling

US Patent:
20210327133, Oct 21, 2021
Filed:
May 21, 2021
Appl. No.:
17/327516
Inventors:
- Sunnyvale CA, US
Nicholas Apostoloff - San Jose CA, US
International Classification:
G06T 17/05
G06T 15/08
G06T 17/20
G06T 7/33
G06T 7/73
G06T 7/11
Abstract:
A technique for generating virtual models of plants in a field is described. Generally, this includes recording images of plants in-situ; generating point clouds from the images; generating skeleton segments from the point cloud; classifying a subset of skeleton segments as unique plant features using the images; and growing plant skeletons from skeleton segments classified as unique plant features. The technique may be used to generate a virtual model of a single, real plant, a portion of a real plant field, and/or the entirety of the real plant field. The virtual model can be analyzed to determine or estimate a variety of individual plant or plant population parameters, which in turn can be used to identify potential treatments or thinning practices, or predict future values for yield, plant uniformity, or any other parameter that can be determined from the projected results based on the virtual model.
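
As a rough illustration of the pipeline the abstract describes (images to point cloud, point cloud to skeleton segments, feature classification, skeleton growing), here is a minimal Python sketch. Every function body, name, and threshold below is a placeholder assumption, not the patent's implementation.

```python
# Hypothetical sketch of the plant-modelling pipeline; names are illustrative.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class SkeletonSegment:
    points: np.ndarray                 # (N, 3) points backing this segment
    is_unique_feature: bool = False    # e.g. a stem base or leaf tip

@dataclass
class PlantSkeleton:
    segments: List[SkeletonSegment] = field(default_factory=list)

def build_point_cloud(images: List[np.ndarray]) -> np.ndarray:
    """Stand-in for multi-view reconstruction of in-situ plant images."""
    return np.random.rand(1000, 3)     # placeholder cloud

def extract_segments(cloud: np.ndarray, n_segments: int = 20) -> List[SkeletonSegment]:
    """Split the cloud into coarse segments (placeholder for skeletonisation)."""
    return [SkeletonSegment(points=c) for c in np.array_split(cloud, n_segments)]

def classify_unique_features(segments: List[SkeletonSegment], images) -> None:
    """Mark segments that look like unique plant features (placeholder rule)."""
    for i, seg in enumerate(segments):
        seg.is_unique_feature = (i % 5 == 0)   # real system would classify from imagery

def grow_skeletons(segments: List[SkeletonSegment]) -> List[PlantSkeleton]:
    """Grow one skeleton per unique feature by attaching nearby segments."""
    plants = [PlantSkeleton(segments=[s]) for s in segments if s.is_unique_feature]
    for seg in segments:
        if seg.is_unique_feature or not plants:
            continue
        centroid = seg.points.mean(axis=0)
        nearest = min(plants, key=lambda p: np.linalg.norm(
            p.segments[0].points.mean(axis=0) - centroid))
        nearest.segments.append(seg)
    return plants

images = [np.zeros((480, 640, 3), dtype=np.uint8)]   # stand-in field imagery
segments = extract_segments(build_point_cloud(images))
classify_unique_features(segments, images)
plants = grow_skeletons(segments)
print(f"Modelled {len(plants)} plant skeletons from {len(segments)} segments")
```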

Intelligent Software Agent

US Patent:
20210166691, Jun 3, 2021
Filed:
Dec 23, 2020
Appl. No.:
17/132439
Inventors:
- Cupertino CA, US
Abhishek Sharma - Sunnyvale CA, US
Jerremy Holland - Sunnyvale CA, US
Nicholas E. Apostoloff - San Jose CA, US
Russell Y. Webb - San Jose CA, US
Tomas J. Pfister - Foster City CA, US
International Classification:
G10L 15/22
G06K 9/00
G10L 15/18
G10L 15/26
Abstract:
Modifying operation of an intelligent agent in response to facial expressions and/or emotions.

Semantic Coherence Analysis Of Deep Neural Networks

US Patent:
20210117778, Apr 22, 2021
Filed:
Feb 17, 2020
Appl. No.:
16/792835
Inventors:
- Cupertino CA, US
Xavier Suau Cuadros - Barcelona, ES
Luca Zappella - Sunnyvale CA, US
Nicholas E. Apostoloff - San Jose CA, US
Assignee:
Apple Inc. - Cupertino CA
International Classification:
G06N 3/08
G06N 3/04
G06N 20/10
Abstract:
Methods and apparatus are disclosed for interpreting a deep neural network (DNN) using a Semantic Coherence Analysis (SCA)-based interpretation technique. In embodiments, a multi-layered DNN that was trained for one task is analyzed using the SCA technique to select one layer in the DNN that produces salient features for another task. In embodiments, the DNN layers are tested with test samples labeled with a set of concept labels. The output features of a DNN layer are gathered and analyzed according to the concepts. In embodiments, the output is scored with a semantic coherence score, which indicates how well the layer separates the concepts, and one layer is selected from the DNN based on its semantic coherence score. In some embodiments, a support vector machine (SVM) or additional neural network may be added to the selected layer and trained to generate classification results based on the outputs of the selected layer.
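
The layer-selection idea can be sketched as follows. Here the semantic coherence score is approximated by cross-validated linear-probe accuracy, which is an assumption on my part rather than the patent's exact definition, and the per-layer features are simulated instead of taken from a real DNN.

```python
# Minimal sketch of semantic-coherence-style layer selection.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def coherence_score(features: np.ndarray, concept_labels: np.ndarray) -> float:
    """Score how well one layer's features separate the labelled concepts."""
    probe = LinearSVC(max_iter=5000)
    return cross_val_score(probe, features, concept_labels, cv=3).mean()

# Stand-in for features gathered from each layer of a frozen, pre-trained DNN,
# on test samples labelled with a set of concepts.
n_samples = 300
concepts = rng.integers(0, 5, size=n_samples)
layer_features = {
    f"layer_{i}": rng.normal(size=(n_samples, 64)) + (i * 0.3) * concepts[:, None]
    for i in range(4)   # deeper layers made artificially more concept-aligned
}

scores = {name: coherence_score(f, concepts) for name, f in layer_features.items()}
best_layer = max(scores, key=scores.get)
print(scores, "-> selected:", best_layer)

# Train the final classifier (here an SVM) on top of the selected layer's outputs.
classifier = LinearSVC(max_iter=5000).fit(layer_features[best_layer], concepts)
```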

Interpretable Neural Networks For Cuffless Blood Pressure Estimation

US Patent:
20210117782, Apr 22, 2021
Filed:
Jul 31, 2020
Appl. No.:
16/945695
Inventors:
- Cupertino CA, US
Nicholas E. APOSTOLOFF - San Jose CA, US
Amruta PAI - Houston TX, US
International Classification:
G06N 3/08
G06N 3/04
G16H 40/60
Abstract:
In some examples, an individually-pruned neural network can estimate blood pressure from a seismocardiogram (SMG). In some examples, a baseline model can be constructed by training the model with SMG data and blood pressure measurements from a plurality of subjects. One or more filters (e.g., the filters in the top layer of the network) can be ranked by separability, which can be used to prune the model for each unseen user that uses the model thereafter, for example. In some examples, individuals can use individually-pruned models to calculate blood pressure using SMG data without corresponding blood pressure measurements.
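
A rough sketch of the pruning step, assuming the baseline model's top-layer filter activations are already available and approximating "separability" with a Fisher-style ratio; all names, labels, and the keep-count are illustrative assumptions.

```python
# Sketch of separability-based filter pruning for a per-user model.
import numpy as np

def fisher_separability(activations: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-filter ratio of between-class to within-class variance."""
    low, high = activations[labels == 0], activations[labels == 1]
    between = (low.mean(axis=0) - high.mean(axis=0)) ** 2
    within = low.var(axis=0) + high.var(axis=0) + 1e-8
    return between / within

rng = np.random.default_rng(1)
n_filters = 32

# Stand-in for baseline-training activations (SCG windows from many subjects)
# with coarse low/high blood-pressure labels used only to rank the filters.
train_acts = rng.normal(size=(500, n_filters))
train_labels = rng.integers(0, 2, size=500)

ranking = np.argsort(fisher_separability(train_acts, train_labels))
keep = ranking[-16:]                 # retain the 16 most separable filters

# An unseen user's activations pass through the pruned filter set only; no
# blood-pressure measurements are needed from that user at this point.
new_user_acts = rng.normal(size=(50, n_filters))
pruned_features = new_user_acts[:, keep]
print("kept filters:", np.sort(keep))
```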

Face Image Generation With Pose And Expression Control

US Patent:
20210097730, Apr 1, 2021
Filed:
Aug 3, 2020
Appl. No.:
16/983561
Inventors:
- Cupertino CA, US
Nataniel Ruiz Gutierrez - Boston MA, US
Nicholas E. Apostoloff - San Jose CA, US
International Classification:
G06T 11/00
G06K 9/00
G06N 20/00
Abstract:
One aspect of the disclosure is a non-transitory computer-readable storage medium including program instructions. Operations performed by execution of the program instructions include obtaining an input image that depicts a face of a subject, having an initial facial expression and an initial pose, determining a reference shape description based on the input image, determining a target shape description based on the reference shape description, a facial expression difference, and a pose difference, generating a rendered target shape image using the target shape description, and generating an output image based on the input image and the rendered target shape using an image generator, wherein the output image is a simulated image of the subject of the input image that has a final expression that is based on the initial facial expression and the facial expression difference, and a final pose that is based on the initial pose and the pose difference.
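
The claimed flow (reference shape from the input image, target shape via expression and pose differences, rendered target shape, then a generator producing the output image) can be outlined structurally. Each function below stands in for a learned or geometric component and is not the patent's implementation.

```python
# Structural sketch of pose/expression-controlled face image generation.
import numpy as np

def estimate_shape(image: np.ndarray) -> np.ndarray:
    """Reference shape description (e.g. expression and pose parameters)."""
    return np.zeros(10)                       # placeholder parameter vector

def render_shape(shape_params: np.ndarray) -> np.ndarray:
    """Render the target shape description into an image-like map."""
    return np.tile(shape_params[:3], (64, 64, 1)).astype(np.float32)

def image_generator(input_image: np.ndarray, rendered_shape: np.ndarray) -> np.ndarray:
    """Stand-in for the learned generator that produces the output image."""
    return 0.5 * input_image + 0.5 * rendered_shape

input_image = np.random.rand(64, 64, 3).astype(np.float32)
expression_diff = np.r_[0.3, np.zeros(9)]     # e.g. open the mouth slightly
pose_diff = np.r_[np.zeros(9), 0.1]           # e.g. a small head rotation

reference = estimate_shape(input_image)
target = reference + expression_diff + pose_diff
output = image_generator(input_image, render_shape(target))
print(output.shape)                           # simulated image with the new expression/pose
```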

Multimodal Approach For Avatar Animation

US Patent:
20210090314, Mar 25, 2021
Filed:
Dec 20, 2019
Appl. No.:
16/723866
Inventors:
- Cupertino CA, US
Nicholas APOSTOLOFF - San Jose CA, US
Justin BINDER - Oakland CA, US
Paul Richard DIXON - Zurich, CH
Sachin KAJAREKAR - Sunnyvale CA, US
Reinhard KNOTHE - Basel, CH
Sebastian MARTIN - Schwerzenbach, CH
Barry-John THEOBALD - Sunnyvale CA, US
Thibaut WEISE - Munich, DE
International Classification:
G06T 13/40
G06T 13/20
G06F 3/16
G06K 9/00
G06N 3/08
G06N 3/04
Abstract:
Systems and methods for animating an avatar are provided. An example method of animating an avatar includes at an electronic device having one or more processors and memory, receiving an audio input, receiving a video input including at least a portion of a user's face, wherein the video input is separate from the audio input, determining one or more movements of the user's face based on the received audio input and received video input, and generating, using a neural network separately trained with a set of audio training data and a set of video training data, a set of characteristics for controlling an avatar representing the one or more movements of the user's face.
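
A thin structural sketch of the audio/video-to-avatar-controls flow; both encoders and the fusion step are placeholders for the separately trained networks the abstract mentions, and the blendshape-style output is an assumed control format.

```python
# Sketch of multimodal avatar control generation from audio and video.
import numpy as np

def audio_encoder(audio: np.ndarray) -> np.ndarray:
    """Map an audio window to a feature vector (placeholder)."""
    return np.full(16, audio.mean())

def video_encoder(frame: np.ndarray) -> np.ndarray:
    """Map a face crop to a feature vector (placeholder)."""
    return np.full(16, frame.mean())

def to_avatar_controls(audio_feat: np.ndarray, video_feat: np.ndarray) -> np.ndarray:
    """Fuse modalities into per-frame control characteristics, e.g. blendshape weights."""
    fused = np.concatenate([audio_feat, video_feat])
    return np.clip(fused[:20], 0.0, 1.0)       # 20 hypothetical blendshape coefficients

audio_window = np.random.randn(16000)          # ~1 s of audio at 16 kHz
face_frame = np.random.rand(128, 128, 3)       # corresponding video frame of the user's face
controls = to_avatar_controls(audio_encoder(audio_window), video_encoder(face_frame))
print(controls.shape)                          # drives the avatar rig for this frame
```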

System And Method Of Virtual Plant Field Modelling

US Patent:
20200364932, Nov 19, 2020
Filed:
Aug 3, 2020
Appl. No.:
16/984026
Inventors:
- Sunnyvale CA, US
Nicholas Apostoloff - San Jose CA, US
International Classification:
G06T 17/05
G06T 15/08
G06T 17/20
G06T 7/33
G06T 7/73
G06T 7/11
Abstract:
A technique for generating virtual models of plants in a field is described. Generally, this includes recording images of plants in-situ; generating point clouds from the images; generating skeleton segments from the point cloud; classifying a subset of skeleton segments as unique plant features using the images; and growing plant skeletons from skeleton segments classified as unique plant features. The technique may be used to generate a virtual model of a single, real plant, a portion of a real plant field, and/or the entirety of the real plant field. The virtual model can be analyzed to determine or estimate a variety of individual plant or plant population parameters, which in turn can be used to identify potential treatments or thinning practices, or predict future values for yield, plant uniformity, or any other parameter that can be determined from the projected results based on the virtual model.

Generating Responses To User Interaction Data Based On User Interaction-Styles

US Patent:
20200311347, Oct 1, 2020
Filed:
Feb 24, 2020
Appl. No.:
16/798858
Inventors:
- Cupertino CA, US
Nicholas Elia Apostoloff - San Jose CA, US
Garrett Laws Weinberg - Santa Cruz CA, US
Russell Y. Webb - San Jose CA, US
Katherine Elaine Metcalf - Mountain View CA, US
International Classification:
G06F 40/30
G06F 40/253
G06K 9/62
Abstract:
A method includes obtaining user input interaction data. The user input interaction data includes one or more user interaction input values respectively obtained from the corresponding one or more input devices. The user input interaction data includes a word combination. The method includes generating a user interaction-style indicator value corresponding to the word combination in the user input interaction data. The user interaction-style indicator value is a function of the word combination and a portion of the one or more user interaction input values. The method includes determining, using a semantic text analyzer, a semantic assessment of the word combination in the user input interaction data based on the user interaction-style indicator value and a natural language assessment of the word combination. The method includes generating a response to the user input interaction data according to the user interaction-style indicator value and the semantic assessment of the word combination.
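
An illustrative sketch of the described flow, with invented placeholder rules standing in for the learned interaction-style and semantic components named in the abstract; the signal names and thresholds are assumptions.

```python
# Sketch of style-conditioned response generation from user interaction data.
from dataclasses import dataclass

@dataclass
class UserInteraction:
    words: str             # the word combination
    typing_speed: float    # example interaction input value (chars/sec)
    loudness: float        # example interaction input value (normalised 0..1)

def interaction_style(ix: UserInteraction) -> str:
    """Indicator value as a function of the words and the other input values."""
    terse = len(ix.words.split()) <= 3
    intense = ix.typing_speed > 8.0 or ix.loudness > 0.8
    return "urgent" if (terse and intense) else "relaxed"

def semantic_assessment(words: str, style: str) -> str:
    """Stand-in for the semantic text analyzer conditioned on the style indicator."""
    wants_help = "help" in words.lower() or words.endswith("?")
    return "request" if wants_help else "statement"

def respond(ix: UserInteraction) -> str:
    style = interaction_style(ix)
    meaning = semantic_assessment(ix.words, style)
    if style == "urgent" and meaning == "request":
        return "On it."                        # short response matching the style
    return "Sure, tell me more about what you need."

print(respond(UserInteraction("help now", typing_speed=10.0, loudness=0.9)))
```
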
Nicholas E Apostoloff from San Jose, CA, age ~48