A team of researchers at Carnegie Mellon University in Pittsburgh, US, has devised a system capable of “seeing” the shapes and movements of humans in a room using only Wi-Fi routers.
The scientists published a preprint paper on their findings on arXiv last month.
It explained how they developed a deep neural network that maps the phase and amplitude of Wi-Fi signals sent and received by routers to UV coordinates across 24 regions of the human body.
UV mapping is a technique for modelling 3D objects onto a two-dimensional coordinate system.
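As a hedged illustration of the idea (using a unit sphere rather than a body model, which is an assumption of this sketch and not the paper's method), a 3D surface point can be flattened onto a 2D (u, v) chart like so:

```python
import numpy as np

def sphere_uv(point):
    """Map a 3D point on a unit sphere to 2D (u, v) texture coordinates.

    Illustrative only: DensePose uses per-body-part UV charts, but the
    idea of flattening a 3D surface onto [0, 1] x [0, 1] is the same.
    """
    x, y, z = point / np.linalg.norm(point)
    u = 0.5 + np.arctan2(z, x) / (2 * np.pi)  # longitude -> u
    v = 0.5 - np.arcsin(y) / np.pi            # latitude  -> v
    return u, v

u, v = sphere_uv(np.array([1.0, 0.0, 0.0]))
# A point on the equator lands in the middle of the chart.
print(u, v)
```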
“The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilising Wi-Fi signals as the only input,” the researchers said.
The software used to map the pixels on a human body was DensePose, co-developed by researchers from London and Facebook’s AI division.
The research paper explained that three components were used to produce the coordinates from Wi-Fi signals.
“First, the raw channel state information (CSI) signals are cleaned by amplitude and phase sanitisation,” it stated.
“Then, a two-branch encoder-decoder network performs domain translation from sanitised CSI samples to 2D feature maps that resemble images.”
“The 2D features are then fed to a modified DensePose-RCNN architecture to estimate the UV map, a representation of the dense correspondence between 2D and 3D humans.”
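The first of the three quoted steps, cleaning the raw CSI, can be sketched in numpy. This is a hedged illustration assuming CSI arrives as a complex array of shape (packets, subcarriers); the paper's exact filters may differ, but amplitude outlier clipping and linear-fit phase detrending are the standard forms of the two operations named:

```python
import numpy as np

def sanitise_csi(csi):
    """Sketch of CSI cleaning for a complex array of shape
    (n_packets, n_subcarriers). Illustrative, not the paper's filters."""
    # Amplitude sanitisation: clip per-subcarrier outliers
    # to a band around the median across packets.
    amp = np.abs(csi)
    med = np.median(amp, axis=0, keepdims=True)
    amp = np.clip(amp, 0.5 * med, 2.0 * med)

    # Phase sanitisation: unwrap across subcarriers, then subtract a
    # linear fit to remove timing- and frequency-offset distortion.
    phase = np.unwrap(np.angle(csi), axis=1)
    k = np.arange(csi.shape[1])
    for i in range(phase.shape[0]):
        slope, intercept = np.polyfit(k, phase[i], 1)
        phase[i] -= slope * k + intercept

    return amp * np.exp(1j * phase)
```

The cleaned complex samples, rather than the raw ones, are what the encoder-decoder stage would then consume.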
Finally, to improve the training of the Wi-Fi-input network, the researchers used transfer learning: before training the main network, they minimised the differences between the multi-level feature maps produced from images and those produced from Wi-Fi signals.
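A loss in the spirit of that transfer-learning step can be written in a few lines. The plain summed mean-squared error over feature levels, and the names used here, are assumptions for illustration rather than the paper's exact objective:

```python
import numpy as np

def feature_matching_loss(wifi_feats, image_feats):
    """Penalise differences between multi-level feature maps from an
    image-based teacher and the Wi-Fi branch, level by level.

    Each argument is a list of same-shaped numpy arrays, one per level.
    """
    return sum(float(np.mean((w - t) ** 2))
               for w, t in zip(wifi_feats, image_feats))

# Identical feature pyramids give zero loss; mismatched ones, a positive value.
levels = [np.ones((8, 8)), np.ones((4, 4))]
print(feature_matching_loss(levels, levels))  # 0.0
```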
The researchers argued the technology paved the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
Its uses could include monitoring the well-being of elderly people or children when no one else is in the room, or identifying suspicious behaviour at home, such as the presence of an intruder.
Vice reported that a number of systems had been developed for “seeing” people through walls without using advanced RGB cameras or expensive LiDAR equipment.
MIT researchers were previously able to use cell phone signals to detect people through walls, while another team at the institute used Wi-Fi to identify people in another room and render their bodies and movements as stick figures.