Peter Damron Phones & Addresses

  • 4430 Romilly Way, Fremont, CA 94536 (510) 468-8944
  • Mountain View, CA
  • 4430 Romilly Way, Fremont, CA 94536

Resumes

Consulting Member Of Technical Staff

Location:
Fremont, CA
Industry:
Computer Software
Work:
Oracle
Consulting Member of Technical Staff

Oracle
Senior Principal Software Engineer
Skills:
Compilers
Code Generation
Sparc
Solaris
Creativity
Patents
Product Development
Software Architecture
C Language
C++ Language
Compiler Optimization
Computer Science
Assembly Language
Programming
Programming Languages
Fortran
Technical Presentations
Project Planning

Consultant Freelance

Location:
Fremont, CA
Work:

Consultant Freelance

Publications

US Patents

Method And Apparatus For Performing Prefetching At The Function Level

US Patent:
6421826, Jul 16, 2002
Filed:
Nov 5, 1999
Appl. No.:
09/434715
Inventors:
Nicolai Kosche - San Francisco CA
Peter C. Damron - Fremont CA
Assignee:
Sun Microsystems, Inc. - Palo Alto CA
International Classification:
G06F 9/44
US Classification:
717/161, 717/160, 717/158
Abstract:
One embodiment of the present invention provides a system for compiling source code into executable code that performs prefetching for memory operations within regions of code that tend to generate cache misses. The system operates by compiling a source code module containing programming language instructions into an executable code module containing instructions suitable for execution by a processor. Next, the system runs the executable code module in a training mode on a representative workload and keeps statistics on cache miss rates for functions within the executable code module. These statistics are used to identify a set of "hot" functions that generate a large number of cache misses. Next, explicit prefetch instructions are scheduled in advance of memory operations within the set of hot functions. In one embodiment, explicit prefetch operations are scheduled into the executable code module by activating prefetch generation at a start of an identified function, and by deactivating prefetch generation at a return from the identified function.
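The effect of this profile-guided scheme can be illustrated with a small sketch. The C code below is not from the patent; it uses the GCC/Clang `__builtin_prefetch` intrinsic to show, in hand-written form, what a compiler might emit once training-mode statistics mark a function as "hot". The names `sum_hot`, `sum_cold`, and the `PREFETCH_AHEAD` distance are illustrative assumptions.

```c
/* Illustrative sketch only: hand-written prefetches of the kind a compiler
 * might insert into a function identified as "hot" by training-mode
 * cache-miss statistics. PREFETCH_AHEAD is a made-up tuning parameter. */
#include <stddef.h>

#define PREFETCH_AHEAD 16   /* elements to prefetch ahead of the current access */

/* Assume profiling marked this function as generating many cache misses,
 * so prefetch generation is "activated" for its body. */
long sum_hot(const long *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], 0 /* read */, 1 /* low locality */);
        total += a[i];      /* the memory operation the prefetch covers */
    }
    return total;
}

/* A "cold" function would be compiled with no prefetches at all, mirroring
 * the idea of deactivating prefetch generation on return from a hot function. */
long sum_cold(const long *a, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}
```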

Method And Apparatus For Performing Prefetching At The Critical Section Level

US Patent:
6427235, Jul 30, 2002
Filed:
Nov 5, 1999
Appl. No.:
09/434714
Inventors:
Nicolai Kosche - San Francisco CA
Peter C. Damron - Fremont CA
Assignee:
Sun Microsystems, Inc. - Palo Alto CA
International Classification:
G06F 9/44
US Classification:
717/148, 717/153
Abstract:
One embodiment of the present invention provides a system for compiling source code into executable code that performs prefetching for memory operations within critical sections of code that are subject to mutual exclusion. The system operates by compiling a source code module containing programming language instructions into an executable code module containing instructions suitable for execution by a processor. Next, the system identifies a critical section within the executable code module by identifying a region of code between a mutual exclusion lock operation and a mutual exclusion unlock operation. The system schedules explicit prefetch instructions into the critical section in advance of associated memory operations. In one embodiment, the system identifies the critical section of code by using a first macro to perform the mutual exclusion lock operation, wherein the first macro additionally activates prefetching. The system also uses a second macro to perform the mutual exclusion unlock operation, wherein the second macro additionally deactivates prefetching.
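A minimal user-level sketch of the lock/unlock-macro idea follows, assuming POSIX threads. In the patent the first macro toggles compiler prefetch generation; here "activating prefetching" is modelled more simply as prefetching the shared structure the critical section is about to touch. The type `shared_t` and the macro names are illustrative, not taken from the patent.

```c
/* Sketch only: macros in the spirit of the patent's lock/unlock macros.
 * A real implementation would toggle compiler prefetch generation; this
 * sketch just prefetches the protected data on lock acquisition. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    long counter;
    long history[64];
} shared_t;

/* Acquire the lock and issue prefetches for the data the critical
 * section will access. */
#define LOCK_AND_PREFETCH(s)                                  \
    do {                                                      \
        pthread_mutex_lock(&(s)->lock);                       \
        __builtin_prefetch(&(s)->counter, 1 /* write */, 3);  \
        __builtin_prefetch(&(s)->history[0], 1, 3);           \
    } while (0)

/* Release the lock; in the patent this is also where prefetch
 * generation would be deactivated. */
#define UNLOCK(s) pthread_mutex_unlock(&(s)->lock)

void record(shared_t *s, long value)
{
    LOCK_AND_PREFETCH(s);            /* critical section begins */
    s->history[s->counter % 64] = value;
    s->counter++;
    UNLOCK(s);                       /* critical section ends */
}
```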

Method And Apparatus For Inserting Data Prefetch Operations Using Data Flow Analysis

US Patent:
6567975, May 20, 2003
Filed:
Nov 8, 1999
Appl. No.:
09/436464
Inventors:
Peter C. Damron - Fremont CA
Assignee:
Sun Microsystems, Inc. - Santa Clara CA
International Classification:
G06F 9/45
US Classification:
717/155, 717/156, 714/35
Abstract:
A software method and apparatus for inserting prefetch operations according to data flow analysis. The invention traverses program code to ascertain memory operations and their associated address-forming operations, and calculates the relative distance between the two operations. If the distance between the two operations is such that a prefetch operation, inserted between the two operations (in one embodiment, immediately after the address-forming operation), would increase the speed of the program when executed, then the prefetch operation is inserted.
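The placement decision can be shown with a brief sketch. The patent's data-flow analysis chooses the insertion point automatically; the hand-written C below simply illustrates the resulting shape, with the prefetch placed right after the address-forming operation and well ahead of the load that uses it. The function `lookup` and the amount of intervening work are assumptions for illustration.

```c
/* Sketch only: the effect of inserting a prefetch immediately after an
 * address-forming operation, far ahead of the load that uses the address. */
#include <stddef.h>

long lookup(const long *table, size_t idx, long acc)
{
    const long *p = &table[idx];      /* address-forming operation */
    __builtin_prefetch(p, 0, 3);      /* prefetch inserted right after it */

    /* Enough independent work between address formation and use gives
     * the prefetch time to complete. */
    for (int i = 0; i < 32; i++)
        acc = acc * 3 + i;

    return acc + *p;                  /* the memory operation itself */
}
```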

Heuristic For Identifying Loads Guaranteed To Hit In Processor Cache

US Patent:
6574713, Jun 3, 2003
Filed:
Oct 10, 2000
Appl. No.:
09/685431
Inventors:
Nicolai Kosche - San Francisco CA
Peter C. Damron - Fremont CA
Assignee:
Sun Microsystems, Inc. - Santa Clara CA
International Classification:
G06F 12/00
US Classification:
711/137, 711/117, 711/118, 711/213, 711/202, 711/125
Abstract:
A heuristic algorithm is disclosed that identifies loads guaranteed to hit the processor cache and further provides a "minimal" set of prefetches which are scheduled/inserted during compilation of a program. The heuristic algorithm of the present invention utilizes the concept of a "cache line" (i.e., the data chunks received during memory operations) in conjunction with the concept of "related" memory operations to determine which prefetches are unnecessary for related memory operations, thus generating a minimal number of prefetches for related memory operations.
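As an illustrative sketch, the C loop below prefetches once per group of related accesses rather than once per load: the fields of each `struct record` usually sit close enough together to share a cache line, so the second and third loads are the kind of accesses the heuristic would classify as guaranteed to hit. The struct layout, the 64-byte line assumption, and the prefetch distance are all assumptions; the patent's heuristic makes this decision during compilation rather than by hand.

```c
/* Sketch only: one prefetch per group of "related" memory operations.
 * The 64-byte line size and struct layout are illustrative assumptions. */
#include <stddef.h>

struct record {
    long key;      /* offset 0  */
    long value;    /* offset 8  */
    long count;    /* offset 16 -- usually on the same 64-byte line as key */
};

long scan(const struct record *r, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 8 < n)
            __builtin_prefetch(&r[i + 8], 0, 1);  /* one prefetch for the whole record */
        /* The value and count loads normally read the line the key load
         * (or the prefetch) already brought in, so they need no prefetches
         * of their own: they are "guaranteed to hit" in the heuristic's terms. */
        total += r[i].key + r[i].value + r[i].count;
    }
    return total;
}
```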

Cache Memory System And Method For Managing Streaming-Data

US Patent:
6578111, Jun 10, 2003
Filed:
Sep 29, 2000
Appl. No.:
09/677093
Inventors:
Peter C. Damron - Fremont CA
Patricia Shanahan - San Diego CA
Aleksandr Guzovskiy - Lowell MA
Assignee:
Sun Microsystems, Inc. - Santa Clara CA
International Classification:
G06F 12/00
US Classification:
711/133
Abstract:
A system and method are provided for efficient handling of streaming-data in a cache memory system having a cache with several cache-lines capable of storing data. In one aspect, a method is provided for determining, before storing data to a cache-line, whether the storing of data will replace earlier data already stored in the cache. If the storing of data will replace data in the cache, it is determined whether the data that will be replaced is streaming-data. If the data to be replaced is not streaming-data, it is stored into a victim cache. However, if the data to be replaced is streaming-data, it is not stored into the victim cache, thereby improving system efficiency by eliminating the copying of data to be replaced and avoiding replacing other earlier data in the victim cache that may be needed in the future.
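The eviction decision can be modelled with a toy C sketch. The patent describes a hardware mechanism; the structures, sizes, and round-robin replacement below are purely illustrative, and only show the policy of skipping the victim cache when the evicted line holds streaming-data.

```c
/* Sketch only: a toy software model of the eviction policy described above. */
#include <stdbool.h>

#define LINE_BYTES   64
#define VICTIM_SLOTS 8

typedef struct {
    unsigned long tag;
    bool valid;
    bool streaming;                 /* set when the line holds streaming-data */
    unsigned char data[LINE_BYTES];
} cache_line_t;

typedef struct {
    cache_line_t slot[VICTIM_SLOTS];
    unsigned next;                  /* simple round-robin replacement */
} victim_cache_t;

/* Called when `victim` is about to be replaced in the main cache. */
void evict_line(victim_cache_t *vc, const cache_line_t *victim)
{
    if (!victim->valid)
        return;                     /* nothing to preserve */
    if (victim->streaming)
        return;                     /* streaming-data: bypass the victim cache */

    /* Ordinary data: copy into the victim cache as usual. */
    vc->slot[vc->next] = *victim;
    vc->next = (vc->next + 1) % VICTIM_SLOTS;
}
```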

System And Method For Identifying Streaming-Data

US Patent:
6598124, Jul 22, 2003
Filed:
Sep 29, 2000
Appl. No.:
09/677096
Inventors:
Peter C. Damron - Fremont CA
Steve Chessin - Mountain View CA
Assignee:
Sun Microsystems, Inc. - Santa Clara CA
International Classification:
G06F 12/00
US Classification:
711/133
Abstract:
A system and method are provided for efficient handling of streaming-data in a cache memory system having a cache controller and at least one cache with a number of cache-lines, each cache-line including at least one way capable of caching data. In the method, a request to cache data is received in the cache controller, it is determined from the request whether the data is streaming-data, and the data is cached. Optionally, the cache-line includes a data-store in which the data is cached and a tag-field in which information about the data is stored, and the method further includes the step of setting a streaming-data-bit in the tag-field to identify the data as streaming-data. In one embodiment, determining whether the data is streaming-data involves recognizing a special instruction to cache streaming-data in a request from a processor executing a program or from a compiler compiling a program.
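A toy C model of the tag-field idea is sketched below: a fill request carries a streaming hint, and the controller step records it in a streaming-data bit alongside the tag. All structure names, the 64-byte line size, and the hint field are illustrative assumptions, not the patent's hardware interfaces.

```c
/* Sketch only: a toy model of a cache controller setting a streaming-data
 * bit in a line's tag field when the caching request carries a streaming hint. */
#include <stdbool.h>

typedef struct {
    unsigned long tag;
    bool valid;
    bool streaming_data;    /* tag-field bit identifying streaming-data */
} tag_field_t;

typedef struct {
    tag_field_t tag;
    unsigned char data[64]; /* the line's data-store */
} cache_line_t;

typedef struct {
    unsigned long addr;
    bool streaming_hint;    /* true when a streaming-data cache instruction was used */
} cache_request_t;

/* Controller step: fill a line and record whether it holds streaming-data. */
void fill_line(cache_line_t *line, const cache_request_t *req,
               const unsigned char *memory_data)
{
    for (int i = 0; i < 64; i++)
        line->data[i] = memory_data[i];
    line->tag.tag = req->addr / 64;          /* line-granular tag */
    line->tag.valid = true;
    line->tag.streaming_data = req->streaming_hint;
}
```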

System And Method For Insertion Of Prefetch Instructions By A Compiler

US Patent:
6651245, Nov 18, 2003
Filed:
Oct 3, 2000
Appl. No.:
09/679433
Inventors:
Peter C. Damron - Fremont CA
Nicolai Kosche - San Francisco CA
Assignee:
Sun Microsystems, Inc. - Santa Clara CA
International Classification:
G06F 9/45
US Classification:
717/159, 717/151, 711/213
Abstract:
The present invention discloses a method and device for placing prefetch instructions in a low-level or assembly code instruction stream. It involves the use of a new concept called a martyr memory operation. When inserting prefetch instructions in a code stream, some instructions will still miss the cache because in some circumstances a prefetch cannot be added at all, or cannot be added early enough to allow the needed reference to be in cache before being referenced by an executing instruction. A subset of these instructions is identified using a new method and designated as martyr memory operations. Once identified, other memory operations that would also have been cache misses can "hide" behind the martyr memory operation and complete their prefetches while the processor, of necessity, waits for the martyr memory operation instruction to complete. This increases the number of cache hits.
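The martyr idea can be illustrated by a hand-written C sketch. A pointer-chasing load cannot be prefetched far in advance because its address is not known until the previous node is loaded, so it is likely to miss and stall; prefetches for other upcoming accesses are placed just before it so they complete during that stall. The list traversal and the `side_table` accesses are illustrative assumptions, not code from the patent.

```c
/* Sketch only: prefetches scheduled to "hide" behind an unavoidable miss
 * (the martyr memory operation), here the pointer-chasing load of p->next. */
#include <stddef.h>

struct node {
    struct node *next;
    long payload;
};

long walk(struct node *head, const long *side_table, size_t n)
{
    long total = 0;
    size_t i = 0;
    for (struct node *p = head; p != NULL; p = p->next) {
        /* Prefetches for later side_table accesses, issued before the
         * martyr load so they overlap with its stall. */
        if (i + 1 < n) __builtin_prefetch(&side_table[i + 1], 0, 1);
        if (i + 2 < n) __builtin_prefetch(&side_table[i + 2], 0, 1);

        total += p->payload;
        if (i < n)
            total += side_table[i];
        i++;
        /* Martyr memory operation: the loop update's load of p->next will
         * likely miss the cache, giving the prefetches time to complete. */
    }
    return total;
}
```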

Region Based Optimizations Using Data Dependence Graphs

US Patent:
6654952, Nov 25, 2003
Filed:
Feb 3, 2000
Appl. No.:
09/497820
Inventors:
Sreekumar Ramakrishnan Nair - Fremont CA
Peter C. Damron - Fremont CA
Assignee:
Sun Microsystems, Inc. - Santa Clara CA
International Classification:
G06F 9/45
US Classification:
717/157, 717/143, 717/144, 717/161
Abstract:
Region based optimization may be accomplished by creating dependence graphs for each block and then incrementally computing a single dependence graph for the region. First, dependence DAGs are created for each block in the region. This includes defining incoming and outgoing dangling edges for each block. The dependence DAGs are then linked as a control flow graph. Each incoming dangling edge within each block of the region is then examined, with the process traversing each path along the control flow graph in reverse, attempting to match each incoming dangling edge with a corresponding incoming or outgoing dangling edge, stopping only if an outgoing match is found, the same block is examined twice, or the top of the region is reached. A similar process takes place for each outgoing dangling edge, traversing each path along the control flow graph forward, attempting to match each outgoing dangling edge with a corresponding incoming dangling edge, stopping only if a match is found, the same block is examined twice, or the bottom of the region is reached. The region may then be reduced to a single block, with its incoming dangling edges being any unmatched incoming dangling edges at the top of the region and its outgoing dangling edges being any unmatched outgoing dangling edges at the bottom of the region.
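The C sketch below suggests what the per-block bookkeeping might look like: each block keeps lists of incoming and outgoing dangling edges, and a matching step pairs an incoming edge of one block with an outgoing edge of a predecessor. Only the simplest single-predecessor case is shown; the full algorithm's reverse and forward traversals of every control-flow path are omitted. All types, field names, and fixed array sizes are illustrative assumptions.

```c
/* Sketch only: illustrative data structures for per-block dependence DAGs
 * with dangling edges, plus the simplest cross-block matching step. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int resource;        /* e.g. a register or memory-location identifier */
    int instr_index;     /* instruction inside the block that owns the edge */
} dangling_edge_t;

typedef struct block {
    dangling_edge_t incoming[16];  /* dependences whose source lies outside the block */
    size_t n_incoming;
    dangling_edge_t outgoing[16];  /* dependences whose sink lies outside the block */
    size_t n_outgoing;
    struct block *preds[4];        /* control-flow predecessors */
    size_t n_preds;
} block_t;

/* Try to match one incoming dangling edge of `blk` against the outgoing
 * dangling edges of a single predecessor. Returns true when a real
 * cross-block dependence edge should be created. */
bool match_incoming_edge(const block_t *blk, size_t in_idx,
                         const block_t *pred, size_t *out_idx)
{
    const dangling_edge_t *in = &blk->incoming[in_idx];
    for (size_t j = 0; j < pred->n_outgoing; j++) {
        if (pred->outgoing[j].resource == in->resource) {
            *out_idx = j;          /* matched: stop searching along this path */
            return true;
        }
    }
    return false;                  /* unmatched here: the full algorithm keeps
                                      traversing the control flow graph in reverse */
}
```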
Peter C Damron from Fremont, CA, age ~66