Engineers build LEGO-like artificial intelligence chip

Imagine a more sustainable future, where cell phones, smartwatches and other wearable devices don’t have to be shelved or thrown away for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip, like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste.

Now, MIT engineers have taken a step toward that modular vision with a LEGO-esque design for a stackable, reconfigurable artificial intelligence chip.

The design includes alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip's layers to communicate optically. Other modular chip designs use conventional wiring to relay signals between layers; such intricate connections are difficult, if not impossible, to break and rewire, which makes those stackable designs effectively impossible to reconfigure.

The MIT design uses light rather than physical wires to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped or stacked, for example to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it can be expanded indefinitely depending on the combination of layers.”

The researchers are keen to apply the design to edge computing devices — self-sustaining sensors and other electronics that operate independently of centralized or distributed resources such as supercomputers or cloud-based computing.

“As we enter the era of the Internet of Things based on sensor networks, the demand for multifunctional edge computing devices will increase dramatically,” says Jeehwan Kim, an associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”

The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hyun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Light the way

The team’s design is currently configured to perform basic image recognition tasks. It does so through a layering of image sensors, LEDs and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed — which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on the chip, without the need for external software or an internet connection.
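For a rough sense of how a memristor array can act as a neural network: each memristor’s conductance stores a weight, input voltages drive the rows, and by Ohm’s and Kirchhoff’s laws the current summed on each column is a vector-matrix product — a network layer computed by physics. The array sizes and conductance values in this minimal Python sketch are hypothetical and purely illustrative; the team’s actual devices and training are far more involved.

```python
import numpy as np

# Toy model of a memristor crossbar as a physical neural network.
# G[i, j] is the (hypothetical) conductance of the memristor joining
# input row i to output column j; it plays the role of a trained weight.
rng = np.random.default_rng(0)
n_inputs, n_outputs = 25, 3                  # e.g. a 5x5 pixel patch, 3 classes
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))   # conductances, siemens

def crossbar_currents(pixel_voltages):
    # Ohm's law per device + Kirchhoff's current law per column:
    # the column currents are the dot product V @ G, computed "for free".
    return pixel_voltages @ G

image = rng.uniform(0.0, 0.5, size=n_inputs)   # pixel intensities as volts
currents = crossbar_currents(image)
print("column currents (A):", currents)
print("strongest class:", int(np.argmax(currents)))
```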

In their new chip design, the researchers linked image sensors to artificial synapse arrays, each of which they trained to recognize certain letters — in this case M, I and T. While a conventional approach would be to relay the signals from a sensor to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to allow communication between the layers, without the need for a physical connection.

“Other chips are physically wired through metal, which makes them difficult to rewire and redesign, so you’d have to make a new chip if you wanted to add a new function,” says MIT postdoc Hyunseok Kim. “We’ve replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips as we please.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with small pixels. The photodetectors make up an image sensor that receives data, and the LEDs transmit that data to the next layer. When a signal (say, an image of a letter) reaches the image sensor, the image’s light pattern encodes a particular configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which decodes the signal based on the pattern and intensity of the incoming LED light.
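As a software analogy only (not the team’s actual device physics), one optical hop can be pictured as the LED array re-emitting the sensed pattern and the next layer’s photodetectors receiving it with some loss and noise. The drive_gain, coupling, and noise_std parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def led_emit(sensor_pattern, drive_gain=1.0):
    # Assume LED intensity is proportional to the sensed pixel value.
    return drive_gain * sensor_pattern

def photodetect(led_intensities, coupling=0.8, noise_std=0.01):
    # Hypothetical optical coupling efficiency plus detector noise.
    return coupling * led_intensities + rng.normal(0.0, noise_std, led_intensities.shape)

pattern = rng.integers(0, 2, size=25).astype(float)   # a binary 5x5 "letter"
received = photodetect(led_emit(pattern))
# 'received' would then drive the next artificial synapse array, just as a
# wired connection would, but with nothing to physically unsolder or rewire.
print(np.round(received, 3))
```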

Piling up

The team fabricated a single chip with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each consisting of an image sensor, an optical communication layer and an artificial synapse array trained to classify one of three letters: M, I or T. They then shined a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the greater the chance that the image is indeed the letter the particular array was trained to recognize.)

The team found that the chip correctly classified clear images of each letter but was less able to distinguish blurry images, such as between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, after which the chip accurately identified the images.
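In software terms, that swap is like replacing one stage of a pipeline rather than rebuilding the whole thing. Here is a minimal sketch, with a toy mean filter standing in for the upgraded “denoising” processor and a dummy readout standing in for the synapse arrays (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_filter(image, kernel=3):
    # Toy denoiser: average each pixel with its neighbors.
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    return np.array([[padded[i:i + kernel, j:j + kernel].mean()
                      for j in range(image.shape[1])]
                     for i in range(image.shape[0])])

def synapse_readout(image):
    # Dummy classifier: pretend the first three column sums are the
    # currents from the M, I and T arrays, and pick the largest.
    return "MIT"[int(np.argmax(image.sum(axis=0)[:3]))]

stack = [synapse_readout]        # original stack: sensor output -> readout
stack.insert(0, mean_filter)     # "snap in" the denoising block, LEGO-style

signal = rng.random((5, 5))      # a noisy 5x5 "letter"
for layer in stack:
    signal = layer(signal)
print("classified as:", signal)
```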

“We showed stackability, replaceability and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.

The researchers plan to add more detection and processing capabilities to the chip, and they see the applications as limitless.

“We can add layers to a cell phone’s camera so it can recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who previously developed a “smart” skin for monitoring vital signs with Kim.

Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build with the latest sensor and processor “bricks.”

“We can make a common chip platform, and each layer could be sold separately like a video game,” says Jeehwan Kim. “We can create different kinds of neural networks, such as for image or speech recognition, and let the customer choose what they want, and add it to an existing chip like a LEGO.”

This research was supported in part by the Ministry of Trade, Industry and Energy (MOTIE) of South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach program.
