
Rajeshwaran Aarabi Selvanesan

from Cupertino, CA
Age ~51

Rajeshwaran Selvanesan Phones & Addresses

  • 622 Miller Ave, Cupertino, CA 95014
  • San Jose, CA
  • 1158 Fox Hollow Ct, Milpitas, CA 95035 (408) 934-9480
  • Santa Clara, CA
  • 3257 Falls Creek Dr, San Jose, CA 95135 (408) 499-1075

Work

Position: Precision Production Occupations

Education

Degree: Graduate or professional degree

Publications

US Patents

Method And Apparatus For Generating Random Numbers For Use In A Field Programmable Gate Array

US Patent:
7509361, Mar 24, 2009
Filed:
May 30, 2006
Appl. No.:
11/443771
Inventors:
Stephen Clark Purcell - Mountain View CA, US
Scott Kimura - San Jose CA, US
Rajeshwaran Selvanesan - Milpitas CA, US
Assignee:
Pasternak Solutions LLC - Los Altos CA
International Classification:
G06F 1/02
US Classification:
708/256
Abstract:
A method and apparatus for generating random number outputs utilized in generating a noise function at a given location in space. The method consists of partitioning selected portions of the random number generation process to achieve outputs in parallel. The relevant parallel outputs are weighted by effect and then summed together to give the amplitude of the noise function at the given location.
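The abstract describes computing independent random lattice outputs in parallel, weighting each by its effect at a location, and summing them into a noise amplitude. A minimal sketch of that idea (all names and the hash constants are illustrative, not from the patent) in one dimension:

```python
def lattice_random(i, seed=0x9E3779B9):
    """Deterministic pseudo-random value in [0, 1) for lattice point i.
    Each lattice point's value is independent, so these could be
    produced in parallel, as the patent's partitioning suggests."""
    h = (i * 374761393 + seed) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def noise(x):
    """1-D value noise: the two neighboring lattice outputs are
    weighted by a smooth interpolation factor and summed to give
    the noise amplitude at x."""
    i = int(x) if x >= 0 else int(x) - 1   # floor of x
    t = x - i
    fade = t * t * (3 - 2 * t)             # smoothstep weight
    a, b = lattice_random(i), lattice_random(i + 1)
    return a * (1 - fade) + b * fade
```

The real invention targets an FPGA pipeline and higher-dimensional space; this only illustrates the weight-and-sum structure.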

Method And System For Automating Unit Performance Testing In Integrated Circuit Design

US Patent:
7870524, Jan 11, 2011
Filed:
Sep 24, 2007
Appl. No.:
11/860475
Inventors:
Robert A. Alfieri - Chapel Hill NC, US
Rajeshwaran Selvanesan - Milpitas CA, US
Prasad Gharpure - Fremont CA, US
Assignee:
NVIDIA Corporation - Santa Clara CA
International Classification:
G06F 17/50
US Classification:
716/5, 716/1, 716/2, 716/4, 716/6, 716/18, 703/13, 703/14
Abstract:
A method and system for automating unit performance testing in integrated circuit design is disclosed. One embodiment of the present invention sets forth a method, which includes the steps of generating a first performance data for the unit to operate on a workload, embedding the first performance data in the workload for a register transfer level (RTL) implementation of the unit to operate on, and determining whether the expected performance of the unit is achieved based on the comparison between the first performance data and a second performance data, wherein the second performance data is generated after the RTL implementation of the unit operates on the workload.
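The abstract's flow, embed reference performance data in a workload, run the RTL implementation on it, then compare the measured (second) data against the embedded (first) data, can be sketched roughly as below. Function names and the tolerance parameter are assumptions for illustration, not the patent's API:

```python
def embed_expected(workload_ops, expected_cycles):
    """Embed the first (reference) performance data in the workload."""
    return {"ops": workload_ops, "expected_cycles": expected_cycles}

def performance_achieved(measured_cycles, workload, tolerance=0.05):
    """Compare the second (RTL-measured) performance data against the
    embedded expectation; pass if within a relative tolerance."""
    expected = workload["expected_cycles"]
    return abs(measured_cycles - expected) <= tolerance * expected
```

The patent automates this comparison per unit under test; the sketch only shows the embed-then-compare step.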

Cache Interface Protocol Including Arbitration And Hints

US Patent:
8266382, Sep 11, 2012
Filed:
Dec 30, 2009
Appl. No.:
12/650214
Inventors:
Alexander L. Minkin - Los Altos CA, US
Steven J. Heinrich - Madison AL, US
Rajeshwaran Selvanesan - Milpitas CA, US
Charles McCarver - Madison AL, US
Stewart Glenn Carlton - Madison AL, US
Anjana Rajendran - San Jose CA, US
Yan Yan Tang - San Jose CA, US
Assignee:
NVIDIA Corporation - Santa Clara CA
International Classification:
G06F 13/00
G06F 13/28
G06F 3/00
G06F 5/00
US Classification:
711/118, 711/150, 711/167, 711/E12.049, 710/6, 710/60
Abstract:
One embodiment of the present invention sets forth a technique for arbitrating requests received from one of the multiple clients of an L1 cache and for providing hints to the client to assist in arbitration. The L1 cache services multiple clients with diverse latency and bandwidth requirements and may be reconfigured to provide memory spaces for clients executing multiple parallel threads, where the memory spaces each have a different scope.

Cache Miss Processing Using A Defer/Replay Mechanism

US Patent:
8266383, Sep 11, 2012
Filed:
Dec 30, 2009
Appl. No.:
12/650189
Inventors:
Alexander L. Minkin - Los Altos CA, US
Steven J. Heinrich - Madison AL, US
Rajeshwaran Selvanesan - Milpitas CA, US
Charles McCarver - Madison AL, US
Stewart Glenn Carlton - Madison AL, US
Ming Y. Siu - Santa Clara CA, US
Yan Yan Tang - San Jose CA, US
Robert J. Stoll - Los Altos CA, US
Assignee:
NVIDIA Corporation - Santa Clara CA
International Classification:
G06F 13/00
G06F 12/00
G06F 3/00
G06F 5/00
US Classification:
711/125, 711/E12.017, 710/6, 710/39
Abstract:
One embodiment of the present invention sets forth a technique for processing cache misses resulting from a request received from one of the multiple clients of an L1 cache. The L1 cache services multiple clients with diverse latency and bandwidth requirements, including at least one client whose requests cannot be stalled. The L1 cache includes storage to buffer pending requests for cache misses. When an entry is available to store a pending request, a request causing a cache miss is accepted. When the data for a read request becomes available, the cache instructs the client to resubmit the read request to receive the data. When an entry is not available to store a pending request, a request causing a cache miss is deferred and the cache provides the client with status information that is used to determine when the request should be resubmitted.
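The accept-or-defer behavior in the abstract can be modeled with a bounded pending-request buffer. This is a toy sketch under that reading (class and method names are illustrative, not NVIDIA's design): a miss is accepted only while an entry is free, otherwise it is deferred and the client must resubmit later.

```python
from collections import deque

class MissHandler:
    """Toy defer/replay model: bounded storage for pending cache misses."""
    def __init__(self, max_pending):
        self.max_pending = max_pending
        self.pending = deque()

    def request(self, addr):
        if len(self.pending) < self.max_pending:
            self.pending.append(addr)
            return "ACCEPTED"   # miss tracked; data will be fetched
        return "DEFERRED"       # no entry free; client resubmits later

    def fill(self):
        """Data returned from memory: retire the oldest pending miss,
        freeing its entry so a deferred request can now be accepted."""
        return self.pending.popleft() if self.pending else None
```

The real cache also signals the client when to resubmit; here the "DEFERRED" return value stands in for that status information.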

Cache Arbitration Between Multiple Clients

US Patent:
8335892, Dec 18, 2012
Filed:
Dec 30, 2009
Appl. No.:
12/650226
Inventors:
Alexander L. Minkin - Los Altos CA, US
Steven J. Heinrich - Madison AL, US
Rajeshwaran Selvanesan - Milpitas CA, US
Charles McCarver - Madison AL, US
Stewart Glenn Carlton - Madison AL, US
Anjana Rajendran - San Jose CA, US
Assignee:
NVIDIA Corporation - Santa Clara CA
International Classification:
G06F 12/00
G06F 13/00
G06F 13/28
US Classification:
711/118, 711/150, 711/167, 711/E12.049, 710/6, 710/60
Abstract:
One embodiment of the present invention sets forth a technique for arbitrating requests received by an L1 cache from multiple clients. The L1 cache outputs bubble requests to a first one of the multiple clients that cause the first one of the multiple clients to insert bubbles into the request stream, where a bubble is the absence of a request. The bubbles allow the L1 cache to grant access to another one of the multiple clients without stalling the first one of the multiple clients. The L1 cache services multiple clients with diverse latency and bandwidth requirements and may be reconfigured to provide memory spaces for clients executing multiple parallel threads, where the memory spaces each have a different scope.
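The bubble mechanism in the abstract, asking the streaming client to emit an empty slot so another client can be granted without a stall, can be sketched as a tiny arbiter. Names and the hint protocol below are illustrative assumptions, not the patented interface:

```python
class BubbleArbiter:
    """Toy bubble-based arbitration: a waiting client is granted
    only in a bubble slot (a cycle where the streaming client
    deliberately issues no request), so the stream never stalls."""
    def __init__(self):
        self.waiting = []

    def request_access(self, client):
        self.waiting.append(client)

    def hint(self):
        """Hint to the streaming client: insert a bubble next cycle?"""
        return bool(self.waiting)

    def arbitrate(self, stream_req):
        """Grant the slot. A bubble (None) frees it for a waiter."""
        if stream_req is None and self.waiting:
            return self.waiting.pop(0)
        return stream_req
```

The key property is that the stream's own requests are always granted unchanged; only bubbles it chooses to insert are redirected.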

Configurable Cache For Multiple Clients

US Patent:
8595425, Nov 26, 2013
Filed:
Sep 25, 2009
Appl. No.:
12/567445
Inventors:
Alexander L. Minkin - Los Altos CA, US
Steven James Heinrich - Madison AL, US
Rajeshwaran Selvanesan - Milpitas CA, US
Brett W. Coon - San Jose CA, US
Charles McCarver - Madison AL, US
Anjana Rajendran - San Jose CA, US
Stewart G. Carlton - Madison AL, US
Assignee:
NVIDIA Corporation - Santa Clara CA
International Classification:
G06F 12/00
US Classification:
711/104, 711/141, 711/117, 711/118, 711/129, 711/E12.046
Abstract:
One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs found in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.

Cache Operations And Policies For A Multi-Threaded Client

US Patent:
20110078381, Mar 31, 2011
Filed:
Sep 24, 2010
Appl. No.:
12/890476
Inventors:
Steven James Heinrich - Madison AL, US
Alexander L. Minkin - Los Altos CA, US
Brett W. Coon - San Jose CA, US
Rajeshwaran Selvanesan - Milpitas CA, US
Robert Steven Glanville - Cupertino CA, US
Charles McCarver - Madison AL, US
Anjana Rajendran - San Jose CA, US
Stewart Glenn Carlton - Madison AL, US
John R. Nickolls - Los Altos CA, US
Brian Fahs - Los Altos CA, US
International Classification:
G06F 12/08
G06F 12/00
US Classification:
711/122, 711/133, 711/E12.001, 711/E12.024, 711/E12.022
Abstract:
A method for managing a parallel cache hierarchy in a processing unit. The method includes receiving an instruction that includes a cache operations modifier that identifies a level of the parallel cache hierarchy in which to cache data associated with the instruction; and implementing a cache replacement policy based on the cache operations modifier.

Sharing Data Crossbar For Reads And Writes In A Data Cache

US Patent:
20110082961, Apr 7, 2011
Filed:
Sep 28, 2010
Appl. No.:
12/892862
Inventors:
Alexander L. Minkin - Los Altos CA, US
Steven L. Heinrich - Madison AL, US
Rajeshwaran Selvanesan - Milpitas CA, US
Stewart Glenn Carlton - Madison AL, US
John R. Nickolls - Los Altos CA, US
International Classification:
G06F 13/36
G06F 13/00
US Classification:
710/309, 710/317
Abstract:
The invention sets forth an L1 cache architecture that includes a crossbar unit configured to transmit data associated with both read data requests and write data requests. Data associated with read data requests is retrieved from a cache memory and transmitted to the client subsystems. Similarly, data associated with write data requests is transmitted from the client subsystems to the cache memory. To allow for the transmission of both read and write data on the crossbar unit, an arbiter is configured to schedule the crossbar unit transmissions as well as to arbitrate between data requests received from the client subsystems.
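Sharing one data crossbar between read returns and writes means some transfer must be picked each cycle. A toy sketch of that scheduling step (the class, queues, and read-first policy are illustrative assumptions; the patent does not specify this policy):

```python
class SharedCrossbar:
    """Toy model of one crossbar carrying both read-return data
    (cache -> client) and write data (client -> cache); an arbiter
    picks at most one transfer per cycle."""
    def __init__(self):
        self.reads = []    # data ready to return to clients
        self.writes = []   # data headed to the cache memory

    def schedule(self):
        """Arbitrate the crossbar for one cycle (reads favored here)."""
        if self.reads:
            return ("READ", self.reads.pop(0))
        if self.writes:
            return ("WRITE", self.writes.pop(0))
        return None        # crossbar idle this cycle
```

The point of the shared design is that a single set of wires serves both directions of data movement, at the cost of this per-cycle arbitration.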