Bangalore, October 19, 2022: IESA member company d-Matrix announced the opening of its R&D center at the IESA Vision Summit, in the presence of IESA chairman Vivek Tyagi. Founded by Silicon Valley semiconductor veterans Sid Sheth and Sudeep Bhoja, d-Matrix is building a one-of-a-kind datacenter AI inferencing platform using a software-first approach coupled with path-breaking hardware innovations in the areas of in-memory computing (IMC) and chiplet-level scale-out interconnects. d-Matrix has tackled the physics of memory-compute integration using innovative ML tools, software and algorithms, and circuit techniques, addressing a critical frontier in AI compute efficiency.
If you are following the evolution of deep learning-powered AI, the renaissance of generative AI, and the next disruption in computer vision, you likely know it’s all about Transformer-based models. They are powering neural nets with billions to trillions of parameters, and existing silicon architectures (including the plethora of AI accelerators) are struggling to varying degrees to keep up with exploding model sizes and their performance requirements.
“d-Matrix is addressing the exploding need for more AI compute head-on by developing a fully digital in-memory computing accelerator for AI inference that is highly efficient and optimized for the computational patterns in Transformers,” said Sid Sheth, Co-founder, President & CEO, d-Matrix.
At d-Matrix, the team is actively working to build the world’s first inference-focused computing platform for the age of Transformer AI. Transformer-based architectures are creating a whole new class of models, called generative models, that power services like language generation and code generation. d-Matrix has already completed the development of its Nighthawk and Jayhawk platforms, which demonstrate the benefits of digital in-memory computing and chiplets for inference compute. The company is now developing the Corsair platform, which combines these elements with an open, mature, end-to-end software stack that can be deployed frictionlessly from cloud to client computing.
“The Corsair chiplet platform provides an efficient, high-bandwidth communication fabric, balancing compute, memory, and networking to serve generative Transformer AI models like GPT-3 and Stable Diffusion at scale,” said Sudeep Bhoja, Co-founder and CTO, d-Matrix.
d-Matrix recently raised $44M in Series A funding from premium investors Microsoft M12, Playground Global, SK Hynix, and Marvell.
d-Matrix already has R&D centers in Santa Clara and Sydney. When asked about the India center, Sid Sheth said, “We are very excited about our India center. We already have a world-class core team here in design and verification. The talent in India is just amazing. The India team is going to play a critical role in our growth.”
d-Matrix recently roped in Dr. Pradip Thaker to head the India team as VP and Country Head. “We are thrilled to have Pradip join our India team. He brings decades of experience in system-on-chip development,” said Sid. Before d-Matrix, Pradip was VP of Engineering and Country Head for the Marvell India team, where he was responsible for a force of 1500+ employees. When asked why he chose d-Matrix, Dr. Thaker said, “The opportunity to work on ground-breaking AI technology with the world-class engineering and leadership team at d-Matrix was too exciting to pass up. The team has a stellar track record of developing and commercializing silicon systems at scale and has attracted top-tier talent across the industry. We’re going to see AI transform lives in the next decade, and d-Matrix is going to play a key part in this revolutionary technology adoption.”