
Sangseok Park Phones & Addresses

  • Fairfax, VA
  • 3416 Mandalay Dr, Flower Mound, TX 75022
  • Holmdel, NJ
  • 607 Summit Ave, Arlington, TX 76013
  • 2611 Verandah Ln, Arlington, TX 76006 (817) 642-3379
  • Eatontown, NJ
  • Long Branch, NJ

Resumes

Patent Examiner

Location:
Washington, DC
Industry:
Telecommunications
Work:
USPTO
Patent Examiner

EuclidIQ, LLC Jul 2013 - Sep 2016
Senior Video Codec Engineer

Dialogic Jan 1, 2008 - Dec 31, 2013
Member of Technical Staff

Dialogic Nov 2012 - Jun 2013
Senior Software Engineer

Dialogic Oct 2010 - Oct 2012
Video Software Engineer
Education:
The University of Texas at Arlington 2004 - 2008
Doctorate, Doctor of Philosophy, Electrical Engineering
Hanyang University 1992 - 1996
Skills:
Digital Signal Processors
Streaming Media
Video Coding
Linux
H.264
MPEG-4
Video Compression
Debugging
Signal Processing
RTP
VP8
Algorithms
Transcoding
Codecs
C
C++
Languages:
Korean
English
Certifications:
Machine Learning
Coursera

Senior Video Codec Engineer at Euclid Discoveries, LLC

Position:
Senior Video Codec Engineer at Euclid Discoveries, LLC
Location:
Dallas/Fort Worth Area
Industry:
Telecommunications
Work:
Dialogic - Parsippany, NJ since Nov 2012
Senior Software Engineer

Dialogic - Parsippany, NJ Oct 2010 - Oct 2012
Video Software Engineer

Dialogic Media Labs - Eatontown, NJ Nov 2008 - Sep 2010
Member of Technical Staff

Qualcomm Jan 2007 - May 2007
Intern

Agency for Defense Development in Korea Feb 1998 - Apr 2004
Researcher
Education:
The University of Texas at Arlington 2004 - 2008
PhD, Electrical Engineering
Hanyang University 1992 - 1996
Languages:
Korean
English

Business Records

Name / Title:
Sangseok Park, Manager
Company / Classification:
PHASEL, LLC
Phones & Addresses:
2625 Old Denton Rd STE 456, Carrollton, TX 75007
3620 Dresage Ln, Flower Mound, TX 75022

Publications

US Patents

Perceptual Processing Techniques For Video Transcoding

US Patent:
20120257672, Oct 11, 2012
Filed:
Apr 5, 2011
Appl. No.:
13/079955
Inventors:
Kyeong H. Yang - Freehold NJ, US
Sangseok Park - Eatontown NJ, US
International Classification:
H04N 7/30
H04N 7/26
US Classification:
375/240.03, 375/E07.126, 375/E07.226
Abstract:
Systems and methods of video transcoding that employ perceptual processing techniques for enhancing the perceptual quality of transcoded video information, communications, entertainment, and other video content. Such systems and methods of video transcoding are operative to perform perceptual processing of an input video bitstream using predetermined information carried by the input bitstream. Having performed such perceptual processing of the input bitstream, the perceptual quality of transcoded video delivered to an end user is significantly improved.
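The abstract above does not spell out the mechanism, so the following is only a minimal sketch of one plausible reading: per-macroblock information already carried by the decoded input bitstream (here, hypothetically, the incoming quantization parameters) is reused to bias the transcoder's re-encoding quantizer toward regions the original encoder treated as important. All names and thresholds are illustrative, not taken from the patent.

```python
# Minimal sketch (illustrative only): reuse per-macroblock information carried by
# the input bitstream to bias the transcoder's re-encoding quantizer.
# Treating the incoming QP as a marker of perceptual importance is an assumption,
# not a detail taken from the patent abstract.

def perceptual_qp_offsets(input_mb_qps, base_qp, max_offset=3):
    """For each macroblock, derive a small QP offset for re-encoding.

    Macroblocks the original encoder spent more bits on (lower incoming QP
    than average) are assumed to be perceptually important and get a
    negative offset (finer quantization); the rest get a positive offset.
    """
    avg_qp = sum(input_mb_qps) / len(input_mb_qps)
    offsets = []
    for qp in input_mb_qps:
        if qp < avg_qp - 2:          # original encoder favored this block
            offsets.append(-max_offset)
        elif qp > avg_qp + 2:        # original encoder deemed it less important
            offsets.append(+max_offset)
        else:
            offsets.append(0)
    # Clamp the re-encode QP to the valid H.264 range [0, 51].
    return [min(51, max(0, base_qp + off)) for off in offsets]

if __name__ == "__main__":
    incoming = [24, 26, 30, 22, 28, 35, 26, 25]   # decoded per-MB QPs (made up)
    print(perceptual_qp_offsets(incoming, base_qp=28))
```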

Rate Control Algorithm For Scalable Video Encoding With Disposable P-Frames

US Patent:
20170118474, Apr 27, 2017
Filed:
Jan 9, 2017
Appl. No.:
15/401243
Inventors:
Kyeong Ho Yang - Freehold NJ, US
Sangseok Park - Flower Mound TX, US
International Classification:
H04N 19/147
H04N 19/124
H04N 19/105
H04N 19/503
Abstract:
Systems and methods of performing rate control in scalable video encoders that can be used to provide videoconferencing, announcements, and live video streaming to multiple participant devices having diverse bandwidths, resolutions, and/or other device characteristics. The systems and methods can accommodate different target bit rates of the multiple participant devices by operating on scalable video bitstreams in a multi-layer video format, including a base layer having one or more reference video frames, and an enhancement layer having one or more disposable non-reference, predictive video frames. By adjusting the number of disposable non-reference, predictive video frames in the enhancement layer, as well as quantization parameters for the respective base and enhancement layers, the disclosed systems and methods can accommodate the different target bit rates for the respective participant devices, while enhancing the spatial and/or temporal qualities of the base and enhancement layers in the respective video bitstreams.
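As a rough illustration of the layering idea described in this abstract (not the patented algorithm itself), the sketch below chooses, for each participant's target bit rate, how many of the enhancement layer's disposable non-reference P-frames to forward on top of the base layer. The function name, rate figures, and per-frame cost model are assumptions.

```python
# Illustrative sketch, not the patented algorithm: decide how many disposable
# non-reference P-frames from the enhancement layer to forward to a participant,
# given the base-layer cost and the participant's target bit rate.

def frames_to_forward(base_layer_kbps, disposable_frame_kbps, target_kbps, gop_disposables):
    """Return the number of disposable enhancement-layer P-frames (per GOP)
    that fit in the participant's budget after the base layer is sent.

    gop_disposables is the total number of disposable P-frames per GOP.
    disposable_frame_kbps is the average rate contribution of one such frame.
    """
    if target_kbps <= base_layer_kbps:
        return 0                                  # only the base layer fits
    spare = target_kbps - base_layer_kbps
    n = int(spare // disposable_frame_kbps)
    return min(n, gop_disposables)

if __name__ == "__main__":
    # Hypothetical numbers: 400 kbps base layer, each disposable P-frame adds ~50 kbps.
    for target in (350, 500, 700, 1200):
        n = frames_to_forward(400, 50, target, gop_disposables=8)
        print(f"target {target} kbps -> forward {n} disposable P-frames per GOP")
```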

Perceptual Optimization For Model-Based Video Encoding

US Patent:
20170070745, Mar 9, 2017
Filed:
Nov 18, 2016
Appl. No.:
15/356142
Inventors:
- Concord MA, US
Sangseok Park - Flower Mound TX, US
Myo Tun - McKinney TX, US
Dane P. Kottke - Durham NC, US
Jeyun Lee - Austin TX, US
Christopher Weed - Sudbury MA, US
International Classification:
H04N 19/513
H04N 19/124
H04N 19/147
H04N 19/61
H04N 19/117
H04N 19/14
H04N 19/159
H04N 19/13
Abstract:
Perceptual statistics are used to compute importance maps that indicate which regions of a video frame are important to the human visual system. Importance maps may be generated from encoders that produce motion vectors and employ motion estimation for inter-prediction. The temporal contrast sensitivity function (TCSF) may be computed from the encoder's motion vectors. Quality metrics may be used to construct a true motion vector map (TMVM), which refines the TCSF. Spatial complexity maps (SCMs) can be calculated from simple metrics (e.g. block variance, block luminance, SSIM, and edge detection). Importance maps with TCSF, TMVM, and SCM may be used to modify the standard rate-distortion optimization criterion for selecting the optimum encoding solution. Importance maps may modify encoder quantization. The spatial information for the importance maps may be provided by a lookup table based on block variance, where negative and positive spatial QP offsets for block variances are provided.
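The last sentence of the abstract mentions a lookup table that maps block variance to negative and positive spatial QP offsets. The sketch below is one hedged reading of that idea, with a made-up table and a made-up temporal (TCSF) weight folded in; none of the thresholds or offsets come from the patent.

```python
# Hedged sketch of an importance-map-driven QP offset. The variance thresholds,
# the offsets, and the way the temporal weight is folded in are illustrative
# assumptions, not values from the patent.

def spatial_qp_offset(block_variance):
    """Lookup-table style mapping: low-variance (smooth) blocks, where artifacts
    are most visible, get a negative offset; high-variance blocks a positive one."""
    table = [(50.0, -3), (200.0, -1), (800.0, 0), (3000.0, +1)]
    for threshold, offset in table:
        if block_variance < threshold:
            return offset
    return +3

def block_qp(base_qp, block_variance, tcsf_weight):
    """Combine the spatial offset with a temporal-importance weight in [0, 1]
    (higher weight = more visually important = lower QP)."""
    offset = spatial_qp_offset(block_variance)
    offset -= round(2 * tcsf_weight)              # assumed temporal adjustment
    return min(51, max(0, base_qp + offset))

if __name__ == "__main__":
    print(block_qp(base_qp=30, block_variance=40.0, tcsf_weight=0.9))    # smooth, important block
    print(block_qp(base_qp=30, block_variance=2500.0, tcsf_weight=0.1))  # busy, less important block
```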

Perceptual Optimization For Model-Based Video Encoding

US Patent:
20160073111, Mar 10, 2016
Filed:
Sep 3, 2015
Appl. No.:
14/845067
Inventors:
- Concord MA, US
Sangseok Park - Flower Mound TX, US
Myo Tun - Randolph NJ, US
Dane P. Kottke - Lebanon NH, US
Jeyun Lee - Austin TX, US
Christopher Weed - Arlington MA, US
International Classification:
H04N 19/124
H04N 19/463
H04N 19/176
Abstract:
Perceptual statistics may be used to compute importance maps that indicate which regions of a video frame are important to the human visual system. Importance maps may be applied to the video encoding process to enhance the quality of encoded bitstreams. The temporal contrast sensitivity function (TCSF) may be computed from the encoder's motion vectors. Motion vector quality metrics may be used to construct a true motion vector map (TMVM) that can be used to refine the TCSF. Spatial complexity maps (SCMs) can be calculated from metrics such as block variance, block luminance, SSIM, and edge strength, and the SCMs can be combined with the TCSF to obtain a unified importance map. Importance maps may be used to improve encoding by modifying the criterion for selecting optimum encoding solutions or by modifying the quantization for each target block to be encoded.
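As a sketch of the "unified importance map" idea (combining the TCSF with a spatial complexity map) and of modifying the criterion for selecting encoding solutions, the code below scales the Lagrange multiplier in the standard rate-distortion cost J = D + lambda * R per block. The multiplicative combination and the scaling range are assumptions for illustration only.

```python
# Illustrative sketch: combine a temporal (TCSF) map and a spatial complexity map
# into a unified per-block importance map, then use it to scale the Lagrange
# multiplier of the rate-distortion cost J = D + lambda * R.
# The multiplicative combination and the scaling strength are assumptions.

def unified_importance(tcsf, scm):
    """Element-wise combination of two per-block maps, both normalized to [0, 1]."""
    return [[t * s for t, s in zip(trow, srow)] for trow, srow in zip(tcsf, scm)]

def rd_cost(distortion, rate, lam, importance, strength=0.5):
    """Lower lambda for important blocks so the encoder spends more bits there."""
    scaled_lam = lam * (1.0 - strength * importance)
    return distortion + scaled_lam * rate

if __name__ == "__main__":
    tcsf = [[0.9, 0.2], [0.4, 0.1]]
    scm  = [[0.8, 0.5], [0.3, 0.9]]
    imp = unified_importance(tcsf, scm)
    # The same candidate (D=100, R=40, lambda=4) is charged less for its rate in
    # the more important block, steering mode decisions toward spending bits there.
    print(rd_cost(100.0, 40.0, 4.0, imp[0][0]))   # important block
    print(rd_cost(100.0, 40.0, 4.0, imp[1][1]))   # less important block
```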

Continuous Block Tracking For Temporal Prediction In Video Encoding

US Patent:
20150256850, Sep 10, 2015
Filed:
Nov 4, 2014
Appl. No.:
14/532947
Inventors:
- Concord MA, US
John J. Guo - Arcadia CA, US
Jeyun Lee - Austin TX, US
Sangseok Park - Flower Mound TX, US
Christopher Weed - Arlington MA, US
Justin Kwan - Brighton MA, US
Nigel Lee - Chestnut Hill MA, US
International Classification:
H04N 19/56
H04N 19/117
H04N 19/517
Abstract:
Continuous block tracking (CBT) tracks macroblock locations over reference frames to produce better inter-predictions than conventional block-based motion estimation/compensation. CBT includes frame-to-frame tracking, which estimates motion from a frame to a previous frame, and continuous tracking, which relates frame-to-frame motion vectors to block tracks. Frame-to-frame tracking may use block-based or hierarchical motion estimation. CBT combined with enhanced predictive zonal search may create unified motion estimation. Accumulated CBT results may form trajectories for trajectory-based CBT predictions. Metrics measuring continuous-track and motion-vector quality can assess the relative priority of CBT predictions against non-tracker-based predictions and modify encoding choices. Continuous tracks can be analyzed for goodness-of-fit to translational motion models, with outliers removed from encoding. Translational motion models can be extended to entire frames in adaptive picture type selection. Outputs from CBT, used in look-ahead processing via look-ahead tracking, may provide rate control and scene-change detection for the current frame being encoded.
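A minimal sketch of the frame-to-frame versus continuous tracking distinction described above: per-frame motion vectors are chained backwards so that a block in the current frame receives an accumulated displacement to an earlier reference frame. The grid-snapping lookup and data layout are hypothetical.

```python
# Minimal sketch of continuous block tracking (CBT): chain per-frame motion vectors
# backwards so a block in the current frame is tracked across several reference
# frames. The grid-snapping rule used to look up the next vector is an assumption.

BLOCK = 16  # macroblock size in pixels

def snap(x, y):
    """Snap a tracked position to the nearest block origin to index the MV field."""
    return (round(x / BLOCK) * BLOCK, round(y / BLOCK) * BLOCK)

def continuous_track(block_xy, mv_fields, depth):
    """Follow frame-to-frame motion vectors for `depth` frames.

    mv_fields[k] maps a block origin (x, y) in frame t-k to its motion vector
    (dx, dy) pointing into frame t-k-1. Returns the accumulated displacement.
    """
    x, y = block_xy
    total_dx = total_dy = 0.0
    for k in range(depth):
        dx, dy = mv_fields[k].get(snap(x, y), (0.0, 0.0))
        x, y = x + dx, y + dy
        total_dx, total_dy = total_dx + dx, total_dy + dy
    return total_dx, total_dy

if __name__ == "__main__":
    # Two frames of made-up motion-vector fields for a single block.
    fields = [{(32, 32): (4.0, -2.0)}, {(32, 32): (3.0, -1.0)}]
    print(continuous_track((32, 32), fields, depth=2))   # -> (7.0, -3.0)
```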

Rate Control Algorithm For Scalable Video Encoding With Disposable P-Frames

US Patent:
20140376617, Dec 25, 2014
Filed:
Nov 22, 2013
Appl. No.:
14/087264
Inventors:
Kyeong Ho Yang - Freehold NJ, US
Sangseok Park - Flower Mound TX, US
International Classification:
H04N 19/187
H04N 19/50
H04N 19/124
US Classification:
375/240.03
Abstract:
Systems and methods of performing rate control in scalable video encoders that can be used to provide videoconferencing, announcements, and live video streaming to multiple participant devices having diverse bandwidths, resolutions, and/or other device characteristics. The systems and methods can accommodate different target bit rates of the multiple participant devices by operating on scalable video bitstreams in a multi-layer video format, including a base layer having one or more reference video frames, and an enhancement layer having one or more disposable non-reference, predictive video frames. By adjusting the number of disposable non-reference, predictive video frames in the enhancement layer, as well as quantization parameters for the respective base and enhancement layers, the disclosed systems and methods can accommodate the different target bit rates for the respective participant devices, while enhancing the spatial and/or temporal qualities of the base and enhancement layers in the respective video bitstreams.
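This abstract repeats the layering scheme of the related application listed above. As a complement to the earlier frame-forwarding sketch, the code below illustrates the other knob the abstract mentions, adjusting the quantization parameters of the base and enhancement layers when a layer drifts from its bit budget. The step size and dead zone are generic assumptions, not the patented rate-control law.

```python
# Complementary sketch to the earlier one: nudge per-layer quantization parameters
# when a layer's measured bit rate drifts from its target. The +/-1 step rule and
# the 10% dead zone are generic assumptions, not the patented rate-control law.

def adjust_layer_qp(qp, measured_kbps, target_kbps, dead_zone=0.10):
    """Raise QP when the layer overshoots its budget, lower it when it undershoots."""
    if measured_kbps > target_kbps * (1 + dead_zone):
        qp += 1           # coarser quantization -> fewer bits
    elif measured_kbps < target_kbps * (1 - dead_zone):
        qp -= 1           # finer quantization -> more bits, better quality
    return min(51, max(0, qp))

if __name__ == "__main__":
    base_qp, enh_qp = 30, 32
    base_qp = adjust_layer_qp(base_qp, measured_kbps=470, target_kbps=400)
    enh_qp  = adjust_layer_qp(enh_qp,  measured_kbps=180, target_kbps=250)
    print(base_qp, enh_qp)   # base layer tightened, enhancement layer relaxed
```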
Sangseok E Park from Fairfax, VA, age ~50