
Wolf-Dietrich Weber

Publications

US Patents

System And Method For Acknowledging Receipt Of Messages Within A Packet Based Communication Network

US Patent:
6393023, May 21, 2002
Filed:
May 8, 1998
Appl. No.:
09/075461
Inventors:
Takeshi Shimizu - Kanagawa, JP
Wolf-Dietrich Weber - La Honda, CA
Patrick J. Helland - Redmond, WA
Thomas M. Wicki - Palo Alto, CA
Winfried W. Wilcke - San Jose, CA
Assignee:
Fujitsu Limited - Kawasaki
International Classification:
H04L 12/56
US Classification:
370/389, 370/469
Abstract:
A system and method for acknowledging receipt of messages within a packet-based communication network. A sending node generates a data packet within an upper layer, and transmits the data packet to a receiving node using a lower layer. The lower layer generates and transmits a pseudo reply packet to the upper layer in response to an acknowledgment received from the receiving node. The pseudo reply packet notifies the upper layer of the sending node that the receiving node successfully received the data packet and removes the burden of having an upper layer of the receiving node generate an actual reply packet.
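The pseudo-reply mechanism described in this abstract can be sketched in a few lines. This is an illustrative model only; the class, field, and callback names are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    kind: str          # "data", "ack", or "pseudo_reply"
    payload: bytes = b""

class LowerLayer:
    """Lower (transport) layer on the sending node."""
    def __init__(self, upper_deliver):
        self.upper_deliver = upper_deliver   # callback into the upper layer
        self.in_flight = {}                  # seq -> data packet awaiting ack

    def send(self, pkt: Packet):
        self.in_flight[pkt.seq] = pkt
        # ... transmit pkt on the wire to the receiving node ...

    def on_ack(self, seq: int):
        # A low-level acknowledgment arrived from the receiving node:
        # synthesize a pseudo reply so the sender's upper layer sees a
        # completed exchange without the receiver's upper layer having
        # to generate an actual reply packet.
        if seq in self.in_flight:
            del self.in_flight[seq]
            self.upper_deliver(Packet(seq=seq, kind="pseudo_reply"))

replies = []
lower = LowerLayer(upper_deliver=replies.append)
lower.send(Packet(seq=1, kind="data", payload=b"hello"))
lower.on_ack(1)
assert replies[0].kind == "pseudo_reply" and replies[0].seq == 1
```

The key point is that the reply the upper layer consumes is manufactured locally from the acknowledgment, never sent across the network.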

System And Method For Avoiding Deadlock In Multi-Node Network

US Patent:
6490630, Dec 3, 2002
Filed:
Apr 2, 1999
Appl. No.:
09/285316
Inventors:
Wing Leong Poon - Santa Clara, CA
Patrick J. Helland - Bellevue, WA
Takeshi Shimizu - San Jose, CA
Yasushi Umezawa - Cupertino, CA
Wolf-Dietrich Weber - San Jose, CA
Assignee:
Fujitsu Limited
International Classification:
G06F 15/16
US Classification:
709/235, 709/232, 709/234, 370/229, 370/231, 370/232
Abstract:
A computer architecture for avoiding a deadlock condition in an interconnection network comprises a messaging buffer having a size pre-calculated to temporarily store outgoing messages from a node. Messages are classified according to their service requirements and messaging protocols, and reserved quotas in the messaging buffer are allocated for different types of messages. The allocations of the reserved quotas are controlled by a mechanism that, to prevent overflow, limits the maximum number of messages that can be outstanding at any time. The messaging buffer is sized large enough to guarantee that a node is always able to service incoming messages, thereby avoiding deadlock and facilitating forward progress in communications. The buffer may be bypassed to improve system performance when the buffer is empty or when data in the buffer is corrupted. In addition, a multicast engine facilitates dense packing of the buffer and derives information from a message header to determine whether there is a multicast to perform and to permit passage of messages.
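The quota mechanism at the heart of this abstract can be illustrated with a small sketch. The class names, message classes, and quota sizes below are assumptions for illustration, not the patent's actual pre-calculation:

```python
class MessagingBuffer:
    """Outgoing-message buffer with reserved per-class quotas.

    Refusing a message when its class quota is exhausted (instead of
    overflowing) bounds the number of outstanding messages, which is
    what guarantees the node can always service incoming traffic.
    """
    def __init__(self, quotas):
        self.quotas = dict(quotas)              # message class -> reserved slots
        self.outstanding = {c: 0 for c in quotas}
        self.buffer = []

    def try_enqueue(self, msg_class, msg):
        # Admission control: never exceed the class's reserved quota.
        if self.outstanding[msg_class] >= self.quotas[msg_class]:
            return False                        # caller must wait and retry
        self.outstanding[msg_class] += 1
        self.buffer.append((msg_class, msg))
        return True

    def retire(self, msg_class):
        # A message of this class has been delivered and leaves the buffer.
        self.outstanding[msg_class] -= 1

buf = MessagingBuffer({"request": 2, "reply": 2})
assert buf.try_enqueue("request", "r1")
assert buf.try_enqueue("request", "r2")
assert not buf.try_enqueue("request", "r3")     # quota exhausted, no overflow
buf.retire("request")
assert buf.try_enqueue("request", "r3")         # slot freed, admitted again
```

Separating quotas by message class matters because mixing classes in one pool would let low-priority traffic starve the replies needed to drain the buffer, recreating the deadlock.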

Split Sparse Directory For A Distributed Shared Memory Multiprocessor System

US Patent:
6560681, May 6, 2003
Filed:
Mar 30, 1999
Appl. No.:
09/281714
Inventors:
James Christopher Wilson - Los Gatos, CA
Wolf-Dietrich Weber - San Jose, CA
Assignee:
Fujitsu Limited
International Classification:
G06F 12/00
US Classification:
711/144, 711/146
Abstract:
A split sparse directory for a distributed shared memory multiprocessor system with multiple nodes, each node including a plurality of processors, each processor having an associated cache. The split sparse directory is in a memory subsystem which includes a coherence controller, a temporary state buffer and an external directory. The split sparse directory stores information concerning the cache lines in the node, with the temporary state buffer holding state information about transient cache lines and the external directory holding state information about non-transient cache lines.
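The split between transient and non-transient cache-line state can be modeled simply. This is a minimal sketch under assumed names; the real temporary state buffer and external directory are hardware structures, not dictionaries:

```python
class SplitSparseDirectory:
    """Transient cache-line state lives in a small temporary buffer;
    settled (non-transient) state lives in a larger external directory."""
    def __init__(self):
        self.temp_buffer = {}        # line address -> transient state
        self.external_dir = {}       # line address -> stable state

    def lookup(self, addr):
        # Transient state takes precedence: a line in the middle of a
        # coherence transaction is tracked in the temporary state buffer.
        if addr in self.temp_buffer:
            return self.temp_buffer[addr]
        return self.external_dir.get(addr)

    def begin_transaction(self, addr, state):
        self.temp_buffer[addr] = state

    def complete_transaction(self, addr, final_state):
        # Once the transaction settles, move the line's state out of the
        # temporary buffer into the external directory.
        self.temp_buffer.pop(addr, None)
        self.external_dir[addr] = final_state

d = SplitSparseDirectory()
d.begin_transaction(0x100, "pending-read")
assert d.lookup(0x100) == "pending-read"
d.complete_transaction(0x100, "shared")
assert d.lookup(0x100) == "shared" and 0x100 not in d.temp_buffer
```

The payoff of the split is that the frequently consulted transient state stays in a small fast structure, while the bulk of the directory can live in slower external memory.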

Method And Apparatus For Scheduling Requests Using Ordered Stages Of Scheduling Criteria

US Patent:
6578117, Jun 10, 2003
Filed:
Oct 12, 2001
Appl. No.:
09/977517
Inventors:
Wolf-Dietrich Weber - San Jose, CA
Assignee:
Sonics, Inc. - Mountain View CA
International Classification:
G06F 13/18
US Classification:
711/151
Abstract:
The present invention provides for the scheduling of requests to one resource from a plurality of initiator devices. In one embodiment, scheduling of requests within threads and scheduling of initiator device access is performed wherein requests are only reordered between threads.
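The constraint that requests are reordered only between threads, never within one, can be shown with a per-thread FIFO model. This is an illustrative sketch with assumed names, and it models a single scheduling criterion rather than the patent's ordered stages:

```python
from collections import deque

class ThreadScheduler:
    """Requests keep arrival order within a thread; the scheduler may only
    reorder across threads by choosing whose head-of-queue request goes next."""
    def __init__(self):
        self.queues = {}    # thread id -> FIFO of pending requests

    def submit(self, thread_id, request):
        self.queues.setdefault(thread_id, deque()).append(request)

    def next_request(self, priority):
        # One scheduling stage: among threads with pending requests,
        # pick the one the criterion ranks highest, then dequeue its
        # oldest request - so intra-thread order is never violated.
        candidates = [t for t, q in self.queues.items() if q]
        if not candidates:
            return None
        chosen = max(candidates, key=priority)
        return self.queues[chosen].popleft()

sched = ThreadScheduler()
sched.submit("A", "a1"); sched.submit("A", "a2"); sched.submit("B", "b1")
# A criterion favoring thread "B" reorders between threads only:
order = [sched.next_request(lambda t: t == "B") for _ in range(3)]
assert order == ["b1", "a1", "a2"]     # a1 still precedes a2
```

Composing several such criteria in a fixed order (the "ordered stages" of the title) would narrow the candidate set stage by stage before the final pick.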

System And Method For Allocating A Directory Entry For Use In Multiprocessor-Node Data Processing Systems

US Patent:
6625694, Sep 23, 2003
Filed:
May 3, 1999
Appl. No.:
09/304432
Inventors:
Nabil N. Masri - Milpitas, CA
Wolf-Dietrich Weber - La Honda, CA
Assignee:
Fujitsu Ltd. - Kawasaki
International Classification:
G06F 12/12
US Classification:
711/133, 711/144, 711/159
Abstract:
An algorithm for selecting a directory entry in a multiprocessor-node system. In response to a memory request from a processor in a processor node, the algorithm finds an available entry to store information about the requested memory line. If at least one entry is available, then the algorithm uses one of the available entries. Otherwise, the algorithm searches for a "shared" entry. If at least one shared entry is available, then the algorithm uses one of the shared entries. Otherwise, the algorithm searches for a "dirty" entry. If at least one dirty entry is available, then the algorithm uses one of the dirty entries. In selecting a directory entry, the algorithm uses a "least-recently-used" (LRU) algorithm because an entry that was not recently used is more likely to be stale. Further, to improve system performance, the algorithm preferably uses a shared entry before using a dirty entry. In the preferred embodiment, the processor node that utilizes the invention includes at least one processor having a respective cache connected via a bus to main memory.
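The selection order the abstract describes (free entry first, then least-recently-used shared entry, then least-recently-used dirty entry) can be sketched directly. The entry representation and field names here are assumptions for illustration:

```python
def select_entry(entries):
    """Pick a directory entry for a new memory line.

    `entries` is a list of dicts with 'state' in {'free', 'shared',
    'dirty'} and an 'lru' timestamp (smaller = older access).
    """
    # 1. Any free entry is used immediately.
    for entry in entries:
        if entry["state"] == "free":
            return entry
    # 2. Otherwise evict a shared entry before a dirty one (replacing
    #    shared state is cheaper), and within each class prefer the
    #    least recently used entry, which is most likely stale.
    for state in ("shared", "dirty"):
        pool = [e for e in entries if e["state"] == state]
        if pool:
            return min(pool, key=lambda e: e["lru"])
    return None

entries = [{"state": "dirty", "lru": 1},
           {"state": "shared", "lru": 5},
           {"state": "shared", "lru": 2}]
# The older of the two shared entries wins, even though a dirty entry
# is older still:
assert select_entry(entries) == {"state": "shared", "lru": 2}
```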

Cache Coherence Unit For Interconnecting Multiprocessor Nodes Having Pipelined Snoopy Protocol

US Patent:
6631448, Oct 7, 2003
Filed:
Mar 12, 1998
Appl. No.:
09/041568
Inventors:
Wolf-Dietrich Weber - La Honda, CA
Assignee:
Fujitsu Limited
International Classification:
G06F 12/08
US Classification:
711/141, 711/144, 711/145, 711/146, 711/119
Abstract:
The present invention consists of a cache coherence protocol within a cache coherence unit for use in a data processing system. The data processing system is comprised of multiple nodes, each node having a plurality of processors with associated caches, a memory, and input/output. The processors within the node are coupled to a memory bus operating according to a "snoopy" protocol. This invention includes a cache coherence protocol for a sparse directory in combination with the multiprocessor nodes. In addition, the invention has the following features: the current state and information from the incoming bus request are used to make an immediate decision on actions and next state; the decision mechanism for outgoing coherence is pipelined to follow the bus; and the incoming coherence pipeline acts independently of the outgoing coherence pipeline.
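The "immediate decision" feature, where current state plus the incoming bus request determine both the action and the next state in one step, amounts to a single table lookup. The states and actions below are a simplified MSI-style illustration, not the patent's actual protocol:

```python
# (current directory state, bus request) -> (action, next state).
# A one-step lookup lets the decision logic keep pace with a pipelined
# snoopy bus instead of iterating over sharers before responding.
DECISION = {
    ("invalid", "read"):  ("fetch_from_memory",   "shared"),
    ("invalid", "write"): ("fetch_exclusive",     "dirty"),
    ("shared",  "read"):  ("supply_data",         "shared"),
    ("shared",  "write"): ("invalidate_sharers",  "dirty"),
    ("dirty",   "read"):  ("writeback_and_share", "shared"),
    ("dirty",   "write"): ("transfer_ownership",  "dirty"),
}

def decide(state, request):
    """Immediate action/next-state decision for one incoming bus request."""
    return DECISION[(state, request)]

assert decide("shared", "write") == ("invalidate_sharers", "dirty")
```

Because each decision is self-contained, the incoming and outgoing coherence pipelines can proceed independently, as the abstract notes.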

Method And Apparatus For Response Modes In Pipelined Environment

US Patent:
6785753, Aug 31, 2004
Filed:
Jun 1, 2001
Appl. No.:
09/872126
Inventors:
Wolf-Dietrich Weber - San Jose, CA
Jay S. Tomlinson - San Jose, CA
Drew E. Wingard - San Carlos, CA
Assignee:
Sonics, Inc. - Mountain View CA
International Classification:
G06F 13/42
US Classification:
710/105, 710/107
Abstract:
A pipelined network is disclosed that provides at least one mode to control the state of a response flag and to handle cases in which the target device is unable to respond to an initiator device request.

Shared Memory Multiprocessor Expansion Port For Multi-Node Systems

US Patent:
6799217, Sep 28, 2004
Filed:
Jun 4, 2001
Appl. No.:
09/874189
Inventors:
James C. Wilson - Los Gatos, CA
Wolf-Dietrich Weber - San Jose, CA
Assignee:
Fujitsu Limited
International Classification:
G06F 15/16
US Classification:
709/230, 709/214, 710/105
Abstract:
A method for communicating between nodes of a plurality of nodes is disclosed. Each node includes a plurality of processors and an interconnect chipset. The method issues a request for data from a processor in a first node and passes the request for data to other nodes through an expansion port (or scalability port). The method also starts an access of a memory in response to the request for data and snoops a processor cache of each processor in each node. The method accordingly identifies the location of the data in either the processor cache or memory in the node having the processor issuing the request or in a processor cache or memory of another node. A method for requesting data between two directly coupled nodes in a router system is also disclosed. A method for requesting data between three or more nodes in an interconnect system is also disclosed. A method for resolving crossing cases in an interconnect system is also disclosed.
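The request flow in this abstract, starting a memory access, snooping every cache, and resolving the freshest copy locally or across the expansion port, can be sketched as follows. The data structures and lookup order are illustrative assumptions, not the patent's protocol:

```python
def service_request(addr, local_node, remote_nodes):
    """Resolve a data request in a multi-node system.

    Each node is modeled as a dict with 'caches' (a list of per-processor
    cache dicts) and 'memory' (a dict). Returns (location, data).
    """
    # Snoop the local processor caches while the memory access is in flight.
    for cache in local_node["caches"]:
        if addr in cache:
            return ("local-cache", cache[addr])
    # Not held locally: forward the request through the expansion
    # (scalability) port and snoop the other nodes' caches and memory.
    for node in remote_nodes:
        for cache in node["caches"]:
            if addr in cache:
                return ("remote-cache", cache[addr])
        if addr in node["memory"]:
            return ("remote-memory", node["memory"][addr])
    # Fall back to the local memory access started when the request issued.
    return ("local-memory", local_node["memory"].get(addr))

local = {"caches": [{}], "memory": {0x10: "A"}}
remote = [{"caches": [{0x20: "B"}], "memory": {}}]
assert service_request(0x20, local, remote) == ("remote-cache", "B")
assert service_request(0x10, local, remote) == ("local-memory", "A")
```

Starting the memory access and the snoops concurrently, rather than serially as this sequential sketch suggests, is what hides the snoop latency in the real hardware.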