Our architecture is scalable: by varying the number of agents, it can produce co-processors ranging from 0.5 TOPS to 64+ TOPS.
Our architecture incorporates advanced innovations such as layer fusion in hardware, intelligent memory access, and on-chip memory optimizations, which together deliver higher utilization.
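To make the layer-fusion idea concrete, here is a minimal sketch in plain Python. It is purely illustrative: AlphaICs' hardware fusion mechanism is not described here, so this models only the general principle that a fused kernel applies two layers in one pass without materializing an intermediate buffer. The function names and the linear+ReLU pairing are assumptions chosen for the example.

```python
# Illustrative sketch of layer fusion (not AlphaICs' actual mechanism).
# Unfused: the linear layer's output is fully written out, then re-read
# by the activation. Fused: the activation is applied as each output
# element is produced, so no intermediate buffer is stored.

def linear(x, w, b):
    """y = x @ w + b for a 1-D input x and a 2-D weight matrix w."""
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*w), b)]

def relu(y):
    return [max(0.0, v) for v in y]

def unfused(x, w, b):
    intermediate = linear(x, w, b)   # pass 1: full buffer materialized
    return relu(intermediate)        # pass 2: buffer re-read, activated

def fused_linear_relu(x, w, b):
    # One pass: activation applied inline, no intermediate buffer.
    return [max(0.0, sum(xi * wij for xi, wij in zip(x, col)) + bj)
            for col, bj in zip(zip(*w), b)]

x = [1.0, -2.0, 3.0]
w = [[0.5, -1.0], [2.0, 0.25], [-0.5, 1.0]]
b = [0.1, -0.2]
assert unfused(x, w, b) == fused_linear_relu(x, w, b)
```

Both paths compute the same result; the fused path simply avoids the memory traffic of the intermediate buffer, which is where a hardware implementation gains utilization.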
Learning Agent or Inference Agent
Each agent can be configured as a learning agent or an inference agent. Our architecture enables learning at the edge within constrained resources.
Built on novel hardware that utilizes small tensor units, together with a novel instruction set architecture that minimizes overhead and increases efficiency, the AlphaICs solution provides best-in-class performance.
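The "small tensor units" approach can be sketched in plain Python: a large matrix multiply is decomposed into many small tile multiplies, each of which is the size of work one tensor unit would handle. The tile size (2) and helper names here are illustrative assumptions, not AlphaICs' actual hardware parameters.

```python
# Hedged sketch: decomposing a matrix multiply into small-tile operations,
# the kind of work a small tensor unit would perform. Tile size and names
# are assumptions for illustration only.

def small_tensor_unit(a_tile, b_tile):
    """Multiply two small tiles -- the work of one tensor unit."""
    n, k, m = len(a_tile), len(b_tile), len(b_tile[0])
    return [[sum(a_tile[i][p] * b_tile[p][j] for p in range(k))
             for j in range(m)] for i in range(n)]

def tiled_matmul(a, b, tile=2):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    # Each (i0, j0, p0) triple dispatches one small-tile multiply;
    # partial results are accumulated into the output block.
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                a_t = [row[p0:p0 + tile] for row in a[i0:i0 + tile]]
                b_t = [row[j0:j0 + tile] for row in b[p0:p0 + tile]]
                part = small_tensor_unit(a_t, b_t)
                for i, row in enumerate(part):
                    for j, v in enumerate(row):
                        c[i0 + i][j0 + j] += v
    return c

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
assert tiled_matmul(a, b) == [[19.0, 22.0], [43.0, 50.0]]
```

Scaling such a design is then a matter of replicating tensor units and scheduling more tiles in parallel, which is consistent with the agent-count scaling described above.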
The AlphaICs software stack (AlphaRT) provides a seamless environment for deploying neural networks onto the RAP™. AlphaRT currently supports TensorFlow, and support for other AI frameworks is planned.