Junghyun Kwon

from Centennial, CO
Age ~48

Junghyun Kwon Phones & Addresses

  • 15413 E Powers Dr, Aurora, CO 80015 (720) 287-4199
  • Centennial, CO
  • Saratoga, CA
  • San Jose, CA

Resumes

Junghyun Kwon

Procurement Assistant Manager

Work:
Hyundai Steel
Procurement Assistant Manager
Skills:
SAP ERP
Microsoft Office

Junghyun Kwon

Publications

US Patents

Performing Non-Maximum Suppression In Parallel

US Patent:
20220222477, Jul 14, 2022
Filed:
Jan 14, 2021
Appl. No.:
17/149437
Inventors:
- Santa Clara CA, US
Wanli Jiang - Shanghai, CN
Junghyun Kwon - Santa Clara CA, US
Siyi Li - Shanghai, CN
Minwoo Park - Saratoga CA, US
Sangmin Oh - San Jose CA, US
International Classification:
G06K 9/00
G06K 9/62
G06K 9/20
G06K 9/46
Abstract:
Apparatuses, systems, and techniques to perform non-maximum suppression (NMS) in parallel to remove redundant bounding boxes. In at least one embodiment, two or more parallel circuits perform two or more portions of an NMS algorithm in parallel to remove one or more redundant bounding boxes corresponding to one or more objects within one or more digital images.
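The abstract above describes parallelizing NMS; for context, the sequential greedy algorithm it builds on can be sketched as follows (a generic NumPy illustration, not the patented parallel-circuit method; `iou_thresh` is an assumed parameter name):

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union between one box and an array of boxes (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop boxes overlapping it."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # retain only boxes whose overlap with the kept box is below the threshold
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```

The inner `iou` computation over all remaining boxes is the part most amenable to parallel evaluation.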

Neighboring Bounding Box Aggregation For Neural Networks

US Patent:
20220222480, Jul 14, 2022
Filed:
Jan 27, 2021
Appl. No.:
17/160271
Inventors:
- Santa Clara CA, US
Yichun Shen - Shanghai, CN
Junghyun Kwon - Santa Clara CA, US
Siyi Li - Shanghai, CN
Sangmin Oh - San Jose CA, US
Minwoo Park - Saratoga CA, US
International Classification:
G06K 9/20
G06K 9/00
G06N 3/08
Abstract:
Apparatuses, systems, and techniques to generate bounding box information. In at least one embodiment, for example, bounding box information is generated based, at least in part, on a plurality of candidate bounding box information.
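As an illustration of the general idea of aggregating neighboring candidate boxes into one (a minimal sketch, not the patented method; the function name is assumed), a confidence-weighted average can be written as:

```python
import numpy as np

def aggregate_boxes(candidates, scores):
    """Confidence-weighted average of candidate (x1, y1, x2, y2) boxes."""
    candidates = np.asarray(candidates, dtype=float)
    w = np.asarray(scores, dtype=float)
    # each coordinate is averaged, weighted by its candidate's confidence
    return (candidates * w[:, None]).sum(axis=0) / w.sum()
```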

Assigning Obstacles To Lanes Using Neural Networks For Autonomous Machine Applications

US Patent:
20230099494, Mar 30, 2023
Filed:
Sep 29, 2021
Appl. No.:
17/489346
Inventors:
- Santa Clara CA, US
Neeraj Sajjan - Santa Clara CA, US
Sangmin Oh - San Jose CA, US
David Nister - Bellevue WA, US
Junghyun Kwon - Santa Clara CA, US
Minwoo Park - Santa Clara CA, US
International Classification:
G06K 9/00
G06K 9/32
G06N 3/08
Abstract:
In various examples, live perception from sensors of an ego-machine may be leveraged to detect objects and assign the objects to bounded regions (e.g., lanes or a roadway) in an environment of the ego-machine in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as output segmentation masks—that may correspond to a combination of object classification and lane identifiers. The output masks may be post-processed to determine object to lane assignments that assign detected objects to lanes in order to aid an autonomous or semi-autonomous machine in a surrounding environment.

Sensor Fusion For Autonomous Machine Applications Using Machine Learning

US Patent:
20210406560, Dec 30, 2021
Filed:
Jun 21, 2021
Appl. No.:
17/353231
Inventors:
- Santa Clara CA, US
Junghyun Kwon - Santa Clara CA, US
Mehmet K. Kocamaz - San Jose CA, US
Hae-Jong Seo - Campbell CA, US
Berta Rodriguez Hervas - San Francisco CA, US
Tae Eun Choe - Belmont CA, US
International Classification:
G06K 9/00
B60W 60/00
G06T 7/292
Abstract:
In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.

Object Detection Using Image Alignment For Autonomous Machine Applications

US Patent:
20210264175, Aug 26, 2021
Filed:
Feb 26, 2021
Appl. No.:
17/187228
Inventors:
- Santa Clara CA, US
Sangmin Oh - San Jose CA, US
Junghyun Kwon - Santa Clara CA, US
Baris Evrim Demiroz - San Jose CA, US
Tae Eun Choe - Belmont CA, US
Minwoo Park - Saratoga CA, US
Chethan Ningaraju - Munich, DE
Hao Tsui - Munich, DE
Eric Viscito - Shelburne VT, US
Jagadeesh Sankaran - Dublin CA, US
Yongqing Liang - San Jose CA, US
International Classification:
G06K 9/00
G06K 9/62
G06K 9/32
G06N 3/08
B60W 60/00
Abstract:
Systems and methods are disclosed that use a geometric approach to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in the two frames. The first frame may be aligned with the second frame, and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a disparity value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape that corresponds to the object may be generated based on the scoring.
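The frame-differencing and thresholding step described above can be illustrated with a minimal sketch (the function name is assumed; the patent's alignment, scoring, and bounding-shape steps are omitted):

```python
import numpy as np

def disparity_mask(frame_a, frame_b, threshold):
    """Per-pixel absolute difference between two aligned frames, thresholded to a mask."""
    disparity = np.abs(frame_a.astype(float) - frame_b.astype(float))
    return disparity > threshold  # True where the frames differ enough to suggest an object
```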

Intersection Detection And Classification In Autonomous Machine Applications

US Patent:
20200293796, Sep 17, 2020
Filed:
Mar 10, 2020
Appl. No.:
16/814351
Inventors:
- Santa Clara CA, US
Berta Rodriguez Hervas - San Francisco CA, US
Hang Dou - Fremont CA, US
Igor Tryndin - Fremont CA, US
David Nister - Bellevue WA, US
Minwoo Park - Saratoga CA, US
Neda Cvijetic - East Palo Alto CA, US
Junghyun Kwon - San Jose CA, US
Trung Pham - Santa Clara CA, US
International Classification:
G06K 9/00
G06K 9/62
G06N 3/08
B60W 30/09
B60W 30/095
B60W 60/00
G08G 1/01
Abstract:
In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.

Object Detection Using Skewed Polygons Suitable For Parking Space Detection

US Patent:
20200294310, Sep 17, 2020
Filed:
Mar 16, 2020
Appl. No.:
16/820164
Inventors:
- Santa Clara CA, US
Junghyun Kwon - San Jose CA, US
Sangmin Oh - San Jose CA, US
Wenchao Zheng - San Jose CA, US
Hae-Jong Seo - San Jose CA, US
David Nister - Bellevue WA, US
Berta Rodriguez Hervas - San Francisco CA, US
International Classification:
G06T 17/30
G06T 7/40
Abstract:
A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
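The corner-displacement decoding and entrance selection described above can be sketched generically (function names are assumed; this is an illustration of the idea, not the patented implementation):

```python
import numpy as np

def decode_polygon(anchor, displacements):
    """Add predicted per-corner displacements to anchor box corners (4 rows of (x, y))."""
    return np.asarray(anchor, dtype=float) + np.asarray(displacements, dtype=float)

def entrance_corners(corners, confidences):
    """Select the two corners with the highest entrance confidence, in original order."""
    idx = np.argsort(confidences)[-2:]
    return corners[np.sort(idx)]
```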

Hybrid Detection Recognition System

US Patent:
20190163698, May 30, 2019
Filed:
Feb 1, 2019
Appl. No.:
16/265149
Inventors:
Junghyun Kwon - Cupertino CA, US
Ramya Narasimha - Palo Alto CA, US
Edward L. Schwartz - Menlo Park CA, US
Max McFarland - Sunnyvale CA, US
Silvio Savarese - Stanford CA, US
Kathrin Berkner - Los Altos CA, US
Assignee:
Ricoh Company, Ltd. - Tokyo
International Classification:
G06F 16/51
G06K 9/62
G06F 16/583
G06K 9/46
G06Q 30/02
Abstract:
A system and method for determining an object or product represented in an image is disclosed. The system receives a first image, determines a region of interest in the first image, determines a classification score for the region of interest using a convolutional neural network that assigns the region of interest the classification score corresponding to a class, and identifies a first product in the first image based on the classification score.