Brian Taba Phones & Addresses

  • San Jose, CA
  • 253 21st St, Philadelphia, PA 19103 (215) 545-8349
  • 253 S 21st St #2F, Philadelphia, PA 19103 (215) 545-8349
  • Philadelphia, PA
  • 22715 San Juan Rd, Cupertino, CA 95014 (408) 255-9395
  • Pasadena, CA
  • 253 S 21st St, Philadelphia, PA 19103 (215) 669-7439

Work

Position: Administrative Support Occupations, Including Clerical Occupations

Emails

b***a@rcn.com

Resumes

Research Staff Member

Location:
2372 Walnut Grove Ave, San Jose, CA 95128
Industry:
Research
Work:
IBM May 2013 - Nov 2014
Software Engineer

IBM May 2013 - Nov 2014
Research Staff Member

Isense Medical Corp Jan 1, 2011 - May 2013
Principal Software Scientist

Evolved Machines Apr 2006 - Dec 2010
Research Scientist
Education:
University of Pennsylvania 1999 - 2005
Doctorates, Doctor of Philosophy, Bioengineering
Caltech 1995 - 1999
Bachelors, Bachelor of Science, Electrical Engineering
Skills:
Matlab
Signal Processing
R&D
Machine Learning
Python
Materials Science
Mathematical Modeling
Biomedical Engineering
Languages:
English

Publications

US Patents

Generating Simulated Neural Circuits

US Patent:
20080228682, Sep 18, 2008
Filed:
Mar 17, 2008
Appl. No.:
12/049899
Inventors:
Paul A. Rhodes - Palm Beach FL, US
Brian Seisho Taba - Cupertino CA, US
Assignee:
EVOLVED MACHINES, INC. - West Palm Beach FL
International Classification:
G06N 3/06
US Classification:
706/26
Abstract:
A first array of simulated neurons having trees of output branches and a second array of simulated neurons having trees of input branches are generated. Thereafter, the output branches of one or more of the simulated neurons of the first array and the input branches of one or more of the simulated neurons of the second array are grown and connections are formed between individual output branches of the simulated neurons of the first array and individual input branches of the simulated neurons of the second array that grow to within a vicinity of each other.
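
As a reading aid only, here is a minimal Python sketch of the growth-and-connection idea described in the abstract. The random-walk growth rule, the fixed vicinity radius, and all names are illustrative assumptions, not the patented method.

# Illustrative sketch (not the patented method): grow branch tips for two neuron
# arrays, then connect output/input branch pairs whose tips end up within a fixed
# vicinity of each other.
import math
import random

VICINITY = 1.0  # assumed connection radius

def grow(tips, steps=10, step_size=0.5):
    # Random-walk growth of 3D branch tip coordinates (stand-in for the growth rule).
    for _ in range(steps):
        tips = [tuple(c + random.uniform(-step_size, step_size) for c in t) for t in tips]
    return tips

def connect(output_tips, input_tips):
    # Form a connection for every output/input tip pair that grew within VICINITY.
    return [(i, j)
            for i, o in enumerate(output_tips)
            for j, t in enumerate(input_tips)
            if math.dist(o, t) <= VICINITY]

first_array = grow([(0.0, 0.0, 0.0)] * 20)    # output branch tips of the first array
second_array = grow([(2.0, 2.0, 2.0)] * 20)   # input branch tips of the second array
print(len(connect(first_array, second_array)), "connections formed")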

Generating Simulated Neural Circuits In A Voxel Space

US Patent:
20080262813, Oct 23, 2008
Filed:
Mar 17, 2008
Appl. No.:
12/049931
Inventors:
Paul A. Rhodes - Palm Beach FL, US
Brian Seisho Taba - Cupertino CA, US
Assignee:
EVOLVED MACHINES, INC. - West Palm Beach FL
International Classification:
G06G 7/60
US Classification:
703/11
Abstract:
A three-dimensional voxel space is generated in which to generate a simulated neural circuit. The voxel space includes a plurality of voxels that store localized information. After the voxel space is generated, a plurality of simulated branched neurons, each of which has one or more input and/or output branches that occupy at least one of the voxels, are embedded in the voxel space. One or more of the branches of the plurality of simulated neurons then are grown in a manner that changes the voxels occupied by the grown branches, and the localized information stored in the voxels is updated to reflect the changes in the voxels occupied by the grown branches.
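
A minimal sketch, assuming a dict-backed grid, of the voxel bookkeeping the abstract describes: each voxel stores localized information (here, which branches occupy it), and growing a branch into a new voxel updates that information. The data structures are illustrative, not the patent's actual scheme.

VOXEL_SIZE = 1.0  # assumed voxel edge length

def to_voxel(point):
    # Map a 3D point to the index of the voxel that contains it.
    return tuple(int(c // VOXEL_SIZE) for c in point)

class VoxelSpace:
    def __init__(self):
        self.occupancy = {}  # voxel index -> set of branch ids (the localized information)

    def embed(self, branch_id, point):
        self.occupancy.setdefault(to_voxel(point), set()).add(branch_id)

    def grow(self, branch_id, new_point):
        # Growth changes which voxels the branch occupies; record the update.
        self.embed(branch_id, new_point)

space = VoxelSpace()
space.embed("axon_0", (0.2, 0.3, 0.1))
space.grow("axon_0", (1.4, 0.3, 0.1))
print(space.occupancy)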

Generating Simulated Neural Circuits In A Voxel Space

US Patent:
20130044110, Feb 21, 2013
Filed:
Apr 20, 2012
Appl. No.:
13/452640
Inventors:
Paul A. Rhodes - Palo Alto CA, US
Brian Seisho Taba - Cupertino CA, US
Assignee:
EVOLVED MACHINES, INC. - West Palm Beach FL
International Classification:
G06T 17/00
US Classification:
345/424
Abstract:
A three-dimensional voxel space is generated in which to generate a simulated neural circuit. The voxel space includes a plurality of voxels that store localized information. After the voxel space is generated, a plurality of simulated branched neurons, each of which has one or more input and/or output branches that occupy at least one of the voxels, are embedded in the voxel space. One or more of the branches of the plurality of simulated neurons then are grown in a manner that changes the voxels occupied by the grown branches, and the localized information stored in the voxels is updated to reflect the changes in the voxels occupied by the grown branches.

An Efficient Method For VLSI Implementation Of Useful Neural Network Activation Functions

US Patent:
20220180177, Jun 9, 2022
Filed:
Dec 8, 2020
Appl. No.:
17/115285
Inventors:
- Armonk NY, US
Myron D. Flickner - San Jose CA, US
Andrew Stephen Cassidy - Austin TX, US
John Vernon Arthur - San Jose CA, US
Pallab Datta - San Jose CA, US
Dharmendra S. Modha - San Jose CA, US
Steven Kyle Esser - San Jose CA, US
Brian Seisho Taba - Cupertino CA, US
Jennifer Klamo - San Jose CA, US
Rathinakumar Appuswamy - San Jose CA, US
Filipp Akopyan - New Windsor NY, US
Carlos Ortega Otero - Los Angeles CA, US
International Classification:
G06N 3/08
G06N 3/063
G06N 3/04
G06N 5/04
Abstract:
A neural inference chip is provided, including at least one neural inference core. The at least one neural inference core is adapted to apply a plurality of synaptic weights to a plurality of input activations to produce a plurality of intermediate outputs. The at least one neural inference core comprises a plurality of activation units configured to receive the plurality of intermediate outputs and produce a plurality of activations. Each of the plurality of activation units is configured to apply a configurable activation function to its input. The configurable activation function has at least a re-ranging term and a scaling term, the re-ranging term determining the range of the activations and the scaling term determining the scale of the activations. Each of the plurality of activation units is configured to obtain the re-ranging term and the scaling term from one or more look up tables.
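
For orientation, a hedged Python sketch of an activation of the general shape the abstract describes: a re-ranging term (output bounds) and a scaling term are looked up from tables keyed by a configuration id. The table contents and the exact functional form are assumptions, not the patented circuit.

RANGE_LUT = {0: (0, 255), 1: (-128, 127)}   # re-ranging term: assumed output ranges
SCALE_LUT = {0: 1.0, 1: 0.5}                # scaling term: assumed scale factors

def activation(x, config_id):
    # Scale the intermediate output, then clamp it into the configured range.
    lo, hi = RANGE_LUT[config_id]
    return min(max(round(x * SCALE_LUT[config_id]), lo), hi)

print([activation(v, 0) for v in (-300, 10, 42.6, 900)])  # -> [0, 10, 43, 255]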

Runtime Reconfigurable Neural Network Processor Core

US Patent:
20230062217, Mar 2, 2023
Filed:
Oct 13, 2022
Appl. No.:
18/046301
Inventors:
- Armonk NY, US
Rathinakumar Appuswamy - San Jose CA, US
John V. Arthur - Mountain View CA, US
Pallab Datta - San Jose CA, US
Steven K. Esser - San Jose CA, US
Myron D. Flickner - San Jose CA, US
Jennifer Klamo - San Jose CA, US
Dharmendra S. Modha - San Jose CA, US
Hartmut Penner - San Jose CA, US
Jun Sawada - Austin TX, US
Brian Taba - Cupertino CA, US
International Classification:
G06N 3/063
G06N 3/04
G06N 5/04
G11C 11/54
G06N 3/08
Abstract:
Hardware neural network processors are provided. A neural core includes a weight memory, an activation memory, a vector-matrix multiplier, and a vector processor. The vector-matrix multiplier is adapted to receive a weight matrix from the weight memory, receive an activation vector from the activation memory, and compute a vector-matrix multiplication of the weight matrix and the activation vector. The vector processor is adapted to receive one or more input vector from one or more vector source and perform one or more vector functions on the one or more input vector to yield an output vector. In some embodiments a programmable controller is adapted to configure and operate the neural core.
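
To make the dataflow concrete, a small software model of the core organization the abstract describes: weights and activations come from their respective memories, a vector-matrix multiply produces an intermediate vector, and a vector processor applies a vector function. Shapes, contents, and the choice of ReLU are illustrative assumptions.

def vector_matrix_multiply(weight_matrix, activation_vector):
    # One dot product per row of the weight matrix.
    return [sum(w * a for w, a in zip(row, activation_vector)) for row in weight_matrix]

def vector_processor(input_vectors, fn):
    # Combine input vectors element-wise, then apply the vector function.
    return [fn(sum(vals)) for vals in zip(*input_vectors)]

weight_memory = {"layer0": [[1, 0, -1], [2, 1, 0]]}
activation_memory = {"layer0": [3, 4, 5]}

intermediate = vector_matrix_multiply(weight_memory["layer0"], activation_memory["layer0"])
output = vector_processor([intermediate], fn=lambda v: max(v, 0))  # ReLU as an example
print(output)  # -> [0, 10]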

Neural Network Weight Distribution From A Grid Of Memory Elements

US Patent:
20210312305, Oct 7, 2021
Filed:
Apr 7, 2020
Appl. No.:
16/842035
Inventors:
- Armonk NY, US
Dharmendra S. Modha - San Jose CA, US
Andrew S. Cassidy - San Jose CA, US
John V. Arthur - Mountain View CA, US
Tapan K. Nayak - San Jose CA, US
Carlos O. Otero - San Jose CA, US
Brian Taba - Cupertino CA, US
Filipp A. Akopyan - New Windsor NY, US
Pallab Datta - San Jose CA, US
International Classification:
G06N 5/04
G06N 3/063
Abstract:
Neural inference chips for computing neural activations are provided. In various embodiments, a neural inference chip comprises at least one neural core, a memory array, an instruction buffer, and an instruction memory. The instruction buffer has a position corresponding to each of a plurality of elements of the memory array. The instruction memory provides at least one instruction to the instruction buffer. The instruction buffer advances the at least one instruction between positions in the instruction buffer. The instruction buffer provides the at least one instruction to at least one of the plurality of elements of the memory array from its associated position in the instruction buffer when the memory of the at least one of the plurality of elements contains data associated with the at least one instruction. Each element of the memory array provides a data block from its memory to its horizontal buffer in response to the arrival of an associated instruction from the instruction buffer. The horizontal buffer of each element of the memory array provides a data block to the horizontal buffer of another of the elements of the memory array or to the at least one neural core.
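
A simplified Python sketch of the data-movement idea in the abstract, under several assumptions: an instruction shifts one position per step along a buffer with a slot for each memory element, a matching element pushes its data block into a chain of horizontal buffers, and the chain shifts toward the neural core at index 0. Tags, timing, and buffer widths are illustrative, not the patent's design.

memory_elements = [{"tag": t, "data": f"block_{t}"} for t in ("w0", "w1", "w2")]
instruction_buffer = [None, None, "w1"]   # the instruction enters at the far end
horizontal_buffers = [None, None, None]   # data blocks shift left toward the core

for step in range(4):
    # 1. An element whose buffer slot holds its tag emits its data block.
    for i, elem in enumerate(memory_elements):
        if instruction_buffer[i] == elem["tag"]:
            horizontal_buffers[i] = elem["data"]
    # 2. Horizontal buffers shift one position toward the core; position 0 delivers.
    delivered, horizontal_buffers = horizontal_buffers[0], horizontal_buffers[1:] + [None]
    # 3. The instruction advances one position through the instruction buffer.
    instruction_buffer = instruction_buffer[1:] + [None]
    if delivered is not None:
        print(f"step {step}: core received {delivered}")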

Massively Parallel Neural Inference Computing Elements

US Patent:
20200202205, Jun 25, 2020
Filed:
Mar 4, 2020
Appl. No.:
16/808900
Inventors:
- Armonk NY, US
John V. Arthur - Mountain View CA, US
Andrew S. Cassidy - San Jose CA, US
Pallab Datta - San Jose CA, US
Steven K. Esser - San Jose CA, US
Myron D. Flickner - San Jose CA, US
Jennifer Klamo - San Jose CA, US
Dharmendra S. Modha - San Jose CA, US
Hartmut Penner - San Jose CA, US
Jun Sawada - Austin TX, US
Brian Taba - Cupertino CA, US
International Classification:
G06N 3/063
G06F 5/01
G06F 7/505
G06F 7/53
G06F 17/16
Abstract:
Massively parallel neural inference computing elements are provided. A plurality of multipliers is arranged in a plurality of equal-sized groups. Each of the plurality of multipliers is adapted to, in parallel, apply a weight to an input activation to generate an output. A plurality of adders is operatively coupled to one of the groups of multipliers. Each of the plurality of adders is adapted to, in parallel, add the outputs of the multipliers within its associated group to generate a partial sum. A plurality of function blocks is operatively coupled to one of the plurality of adders. Each of the plurality of function blocks is adapted to, in parallel, apply a function to the partial sum of its associated adder to generate an output value.
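
For illustration only, a short Python model of the grouped multiply/add/function pipeline the abstract describes; the hardware performs these stages in parallel, while this sketch computes them sequentially. The group size, weights, and ReLU function block are assumptions.

GROUP_SIZE = 4  # assumed number of multipliers per group

def infer(weights, activations, fn=lambda s: max(s, 0)):
    # Multipliers: one product per (weight, activation) pair.
    products = [w * a for w, a in zip(weights, activations)]
    # Adders: one partial sum per equal-sized group of multipliers.
    partial_sums = [sum(products[i:i + GROUP_SIZE]) for i in range(0, len(products), GROUP_SIZE)]
    # Function blocks: one output value per adder.
    return [fn(s) for s in partial_sums]

print(infer(weights=[1, -2, 3, -4, 5, 6, 7, 8], activations=[1] * 8))  # -> [0, 26]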

Compound Instruction Set Architecture For A Neural Inference Chip

US Patent:
20200167158, May 28, 2020
Filed:
Nov 28, 2018
Appl. No.:
16/202871
Inventors:
- Armonk NY, US
Rathinakumar Appuswamy - San Jose CA, US
John V. Arthur - Mountain View CA, US
Pallab Datta - San Jose CA, US
Michael V. Debole - Poughkeepsie NY, US
Steven K. Esser - San Jose CA, US
Myron D. Flickner - San Jose CA, US
Dharmendra S. Modha - San Jose CA, US
Hartmut Penner - San Jose CA, US
Jun Sawada - Austin TX, US
Brian Taba - Cupertino CA, US
International Classification:
G06F 9/30
G06F 9/32
G06N 3/02
Abstract:
A device for controlling neural inference processor cores is provided, including a compound instruction set architecture. The device comprises an instruction memory, which comprises a plurality of instructions for controlling a neural inference processor core. Each of the plurality of instructions comprises a control operation. The device further comprises a program counter. The device further comprises at least one loop counter register. The device is adapted to execute the plurality of instructions. Executing the plurality of instructions comprises: reading an instruction from the instruction memory based on a value of the program counter; updating the at least one loop counter register according to the control operation of the instruction; and updating the program counter according to the control operation of the instruction and a value of the at least one loop counter register.
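
A minimal sketch of the program-counter/loop-counter control flow in the spirit of the abstract. The instruction encoding ("loop", "op", "end", "halt") is an assumption made for illustration and is not the patent's compound instruction format.

instruction_memory = [
    ("loop", 3),          # load the loop counter register with 3
    ("op", "step_core"),  # control operation issued each iteration
    ("end", 1),           # decrement the counter; branch back to address 1 while it is > 0
    ("halt", None),
]

program_counter, loop_counter = 0, 0
while True:
    opcode, arg = instruction_memory[program_counter]  # read instruction at the PC
    if opcode == "loop":
        loop_counter = arg
        program_counter += 1
    elif opcode == "op":
        print("issue", arg, "with loop_counter =", loop_counter)
        program_counter += 1
    elif opcode == "end":
        loop_counter -= 1
        program_counter = arg if loop_counter > 0 else program_counter + 1
    else:  # halt
        break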
Brian S Taba from San Jose, CA, age ~48