
Yaron Eshet Phones & Addresses

  • 7433 Estrella Cir, Boca Raton, FL 33433
  • Alpharetta, GA
  • Seattle, WA

Publications

US Patents

Generating A Depth Map

US Patent:
20130100256, Apr 25, 2013
Filed:
Oct 21, 2011
Appl. No.:
13/278184
Inventors:
Adam G. Kirk - Renton WA, US
Yaron Eshet - Seattle WA, US
Kestutis Patiejunas - Sammamish WA, US
Sing Bing Kang - Redmond WA, US
David Eraker - Seattle WA, US
Simon Winder - Seattle WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
H04N 13/02
US Classification:
348/48; 348/E13.074
Abstract:
Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
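The core of the abstract's depth computation is standard rectified stereo geometry: once dots are matched between the two IR images, horizontal disparity converts directly to depth. A minimal sketch of that final step (the function name, camera parameters, and sample coordinates are illustrative, not from the patent):

```python
import numpy as np

def disparity_to_depth(x_left, x_right, focal_px, baseline_m):
    """Convert horizontal disparities between matched dot features in a
    rectified stereo pair into metric depth, using the pinhole model:
    depth = focal * baseline / disparity."""
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    if np.any(disparity <= 0):
        raise ValueError("matched dots must have positive disparity")
    return focal_px * baseline_m / disparity

# x-coordinates (pixels) of the same projected IR dots seen by each camera.
xl = [320.0, 410.0, 150.0]
xr = [300.0, 400.0, 100.0]
depths = disparity_to_depth(xl, xr, focal_px=600.0, baseline_m=0.1)
```

Dots closer to the cameras exhibit larger disparity, so depth falls as disparity grows; the disparity map in the abstract is this quantity computed densely across the image.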

Multi-Input Free Viewpoint Video Processing Pipeline

US Patent:
20130321396, Dec 5, 2013
Filed:
Aug 30, 2012
Appl. No.:
13/599170
Inventors:
Adam Kirk - Renton WA, US
Kanchan Mitra - Woodinville WA, US
Patrick Sweeney - Woodinville WA, US
Don Gillett - Bellevue WA, US
Neil Fishman - Redmond WA, US
Simon Winder - Seattle WA, US
Yaron Eshet - Seattle WA, US
David Harnett - Seattle WA, US
Amit Mital - Bellevue WA, US
David Eraker - Seattle WA, US
Assignee:
MICROSOFT CORPORATION - Redmond WA
International Classification:
G06T 15/00
US Classification:
345/419
Abstract:
Free viewpoint video of a scene is generated and presented to a user. An arrangement of sensors generates streams of sensor data each of which represents the scene from a different geometric perspective. The sensor data streams are calibrated. A scene proxy is generated from the calibrated sensor data streams. The scene proxy geometrically describes the scene as a function of time and includes one or more types of geometric proxy data which is matched to a first set of current pipeline conditions in order to maximize the photo-realism of the free viewpoint video resulting from the scene proxy at each point in time. A current synthetic viewpoint of the scene is generated from the scene proxy. This viewpoint generation maximizes the photo-realism of the current synthetic viewpoint based upon a second set of current pipeline conditions. The current synthetic viewpoint is displayed.

Cloud Based Free Viewpoint Video Streaming

US Patent:
20130321586, Dec 5, 2013
Filed:
Aug 17, 2012
Appl. No.:
13/588917
Inventors:
Adam Kirk - Renton WA, US
Patrick Sweeney - Woodinville WA, US
Don Gillett - Bellevue WA, US
Neil Fishman - Redmond WA, US
Kanchan Mitra - Woodinville WA, US
Amit Mital - Bellevue WA, US
David Harnett - Seattle WA, US
Yaron Eshet - Seattle WA, US
Simon Winder - Seattle WA, US
David Eraker - Seattle WA, US
Assignee:
MICROSOFT CORPORATION - Redmond WA
International Classification:
H04N 13/02
US Classification:
348/47; 348/E13.074
Abstract:
Cloud based FVV streaming technique embodiments presented herein generally employ a cloud based FVV pipeline to create, render and transmit FVV frames depicting a captured scene as it would be viewed from a current synthetic viewpoint selected by an end user and received from a client computing device. The FVV frames consume roughly the same bandwidth as a conventional streaming movie. To change viewpoints, a new viewpoint is sent from the client to the cloud, and a new streaming movie is initiated from the new viewpoint. Frames associated with that viewpoint are created, rendered and transmitted to the client until a new viewpoint request is received.
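The client/server protocol the abstract describes (stream frames for the current viewpoint until a viewpoint-change request arrives) can be sketched as a toy server loop; the function name, queue-based request channel, and `render` callback are illustrative assumptions, not the patent's implementation:

```python
from queue import Queue, Empty

def stream_fvv(viewpoint_requests, render, max_frames=10):
    """Toy server loop: render and 'transmit' frames for the current
    synthetic viewpoint, switching whenever the client posts a new
    viewpoint request on the queue."""
    current = viewpoint_requests.get()  # initial viewpoint from the client
    frames = []
    while len(frames) < max_frames:
        try:
            current = viewpoint_requests.get_nowait()  # viewpoint change?
        except Empty:
            pass  # no change requested; keep streaming the same viewpoint
        frames.append(render(current))
    return frames

requests = Queue()
requests.put("front")
stream = stream_fvv(requests, lambda v: f"frame@{v}", max_frames=3)
```

In a real deployment the queue would be a network channel and `render` the cloud-side FVV renderer; the point is only that each viewpoint change restarts an ordinary movie-like stream.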

Automated Camera Array Calibration

US Patent:
20130321589, Dec 5, 2013
Filed:
Aug 3, 2012
Appl. No.:
13/566877
Inventors:
Adam G. Kirk - Renton WA, US
Yaron Eshet - Seattle WA, US
David Eraker - Seattle WA, US
Assignee:
MICROSOFT CORPORATION - Redmond WA
International Classification:
H04N 17/02
US Classification:
348/48; 348/47; 348/E17.004
Abstract:
The automated camera array calibration technique described herein automates camera array calibration. The technique can leverage corresponding depth and single or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment it does this by finding common features in the depth maps between two hybrid capture devices and deriving a rough extrinsic calibration based on shared depth map features. It then uses the intensity (e.g., RGB) data corresponding to the depth maps, and the features of that intensity data, to refine the rough extrinsic calibration.
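A rough extrinsic calibration from shared 3-D depth-map features amounts to a rigid alignment between two matched point sets, which the Kabsch algorithm solves in closed form via SVD. A minimal sketch under that assumption (the patent does not specify this exact solver):

```python
import numpy as np

def rough_extrinsics(pts_a, pts_b):
    """Estimate the rotation R and translation t mapping 3-D feature
    points seen by capture device A onto the matching points from
    device B, via least-squares rigid alignment (Kabsch algorithm)."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

The refinement step in the abstract would then use intensity (RGB) features, typically via reprojection-error minimization, to polish this rough estimate.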

Generating Free Viewpoint Video Using Stereo Imaging

US Patent:
20130095920, Apr 18, 2013
Filed:
Oct 13, 2011
Appl. No.:
13/273213
Inventors:
Kestutis Patiejunas - Sammamish WA, US
Kanchan Mitra - Woodinville WA, US
Patrick Sweeney - Woodinville WA, US
Yaron Eshet - Seattle WA, US
Adam G. Kirk - Renton WA, US
Sing Bing Kang - Redmond WA, US
David Eraker - Seattle WA, US
David Harnett - Seattle WA, US
Amit Mital - Bellevue WA, US
Simon Winder - Seattle WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
A63F 13/00
H04N 13/02
US Classification:
463/31; 348/48; 348/E13.074
Abstract:
Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map.
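The first stage of the pipeline above, turning a per-pixel depth map into a point cloud, is a standard pinhole back-projection. A minimal sketch (camera intrinsics and the function name are illustrative, not from the patent):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an (N, 3) point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

cloud = depth_to_point_cloud(np.full((2, 2), 2.0), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The later stages (meshing the cloud and projecting camera textures onto the mesh) operate on this point set; per-frame clouds from all synchronized modules are fused before meshing.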

Object Orientation Estimation

US Patent:
20150348269, Dec 3, 2015
Filed:
May 27, 2014
Appl. No.:
14/288287
Inventors:
- Redmond WA, US
Yaron ESHET - Seattle WA, US
Geoffrey J. HULTEN - Lynnwood WA, US
Assignee:
MICROSOFT CORPORATION - Redmond WA
International Classification:
G06T 7/00
G06K 9/00
G06K 9/52
Abstract:
The description relates to estimating object orientation. One example includes determining a first estimate of object orientation using a first technique and image data. In this example, a second estimate of the object orientation can be determined using a second technique and the image data. The first estimate can be corrected with the second estimate to generate a corrected object orientation estimate which can be output.
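One simple way to "correct the first estimate with the second," when the estimates are orientation angles, is to blend them on the unit circle so that wrap-around (e.g., 350° vs. 10°) is handled correctly. This sketch is an assumption for illustration; the patent does not specify the correction scheme:

```python
import math

def fuse_orientation(coarse_deg, refined_deg, weight=0.8):
    """Correct a coarse orientation estimate with a second, refined
    estimate by blending the two angles as unit vectors, which avoids
    the 0/360 degree discontinuity."""
    a, b = math.radians(coarse_deg), math.radians(refined_deg)
    x = (1 - weight) * math.cos(a) + weight * math.cos(b)
    y = (1 - weight) * math.sin(a) + weight * math.sin(b)
    return math.degrees(math.atan2(y, x)) % 360.0
```

With equal weights, 350° and 10° fuse to 0° rather than the naive arithmetic mean of 180°.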
Yaron Eshet from Boca Raton, FL, age ~47