NVIDIA recently released its SHIELD TV, a product focused on ‘upscaling’ TV media feeds to a comparable 4K experience using AI technology. The NVIDIA blog provides the following description of upscaling:
Putting on a pair of prescription glasses for the first time can feel like instantly snapping the world into focus. Suddenly, trees have distinct leaves. Fine wrinkles and freckles show up on faces. Footnotes in books and even street names on roadside signs become legible. Upscaling, converting lower-resolution media to a higher resolution, offers a similar experience. With new AI upscaling techniques, the enhanced visuals look more crisp and realistic than ever.
One third of televisions in US households are 4K TVs, known as ultra-high definition; however, much of the content currently available on popular streaming services is only offered at lower resolutions. For example, 1080p images, known as full HD, have just a quarter of the pixels of 4K images. To display a 1080p shot edge to edge on a 4K screen, the picture has to be stretched to match the TV’s pixels.
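The quarter-pixel figure follows directly from the frame dimensions. A quick check, assuming the standard 1920×1080 full HD and 3840×2160 4K UHD frame sizes:

```python
# Pixel counts for a full HD (1920x1080) and a 4K UHD (3840x2160) frame
full_hd = 1920 * 1080   # 2,073,600 pixels
uhd_4k = 3840 * 2160    # 8,294,400 pixels

print(uhd_4k // full_hd)  # prints 4: a 1080p frame has one quarter of 4K's pixels
```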
What Is Basic Upscaling?
Basic upscaling is the simplest way of stretching a lower resolution image onto a larger display. Pixels from the lower resolution image are copied and repeated to fill out all the pixels of the higher resolution display. Filtering is applied to smooth the image and round out unwanted jagged edges that may become visible due to the stretching. The result is an image that fits on a 4K display, but can often appear muted or blurry.
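The two steps above, replication and smoothing, can be sketched in a few lines of Python. This is only an illustration of the general idea on a grayscale image stored as nested lists, not the scaler any particular TV uses; the 3×3 box filter stands in for whatever smoothing filter a real device applies.

```python
def upscale_nearest(image, factor):
    """Copy and repeat each pixel `factor` times horizontally and vertically."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

def box_blur(image):
    """3x3 box filter to soften the jagged edges that replication creates."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny, 2)  # 4x4: each pixel copied into a 2x2 block
smooth = box_blur(big)          # filtering rounds out the hard block edges
```

A real scaler would typically use bilinear or bicubic filtering rather than a box blur, but the trade-off is the same: the copied pixels fill the larger display, and the filter softens the blocky edges at the cost of sharpness, which is why the result can look muted or blurry.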
What Is AI Upscaling?
Traditional upscaling starts with a low-resolution image and tries to improve its visual quality at higher resolutions. AI upscaling takes a different approach: Given a low-resolution image, a deep learning model predicts a high-resolution image that would downscale to look like the original, low-resolution image.
To predict the upscaled images with high accuracy, a neural network model must be trained on countless images. The deployed AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper, hair looks scruffier and landscapes pop with striking clarity.
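The defining constraint in the description above is that the predicted high-resolution image should downscale back to something that looks like the low-resolution input. The trained network itself is out of scope here, but that consistency property can be sketched in plain Python; `downscale2x` and the trivial pixel-replication "prediction" below are illustrative stand-ins, not any vendor's algorithm.

```python
def downscale2x(image):
    """Average each 2x2 block: the mapping a predicted HR image must invert."""
    w = len(image[0])
    return [[(image[y][x] + image[y][x + 1] +
              image[y + 1][x] + image[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, len(image), 2)]

def consistency_error(predicted_hr, lr):
    """Mean absolute difference between the downscaled prediction and the
    LR input. A super-resolution model is trained so this stays near zero
    while the predicted high-resolution detail looks realistic."""
    down = downscale2x(predicted_hr)
    diffs = [abs(a - b)
             for drow, lrow in zip(down, lr)
             for a, b in zip(drow, lrow)]
    return sum(diffs) / len(diffs)

# A trivial "prediction": replicate each LR pixel into a 2x2 block.
lr = [[10, 200],
      [60, 90]]
predicted = [[px for px in row for _ in range(2)] for row in lr for _ in range(2)]
print(consistency_error(predicted, lr))  # 0.0: it downscales back exactly
```

Many different high-resolution images downscale to the same low-resolution one, which is why the model must be trained on countless images: training teaches it which of those many candidates contains plausible real-world detail, rather than the flat blocks of the trivial prediction above.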
AI Upscaling Now Being Applied to 3D Scanned Models
Artec 3D has just announced the successful development of a proprietary AI Engine that more than doubles the resolution of its handheld scanners to 0.2 mm in its newly released HD Mode. Artec 3D is the first company to utilize deep learning convolutional neural networks to reconstruct 3D surfaces and improve the quality of 3D models. With HD Mode, users can create exceptionally accurate, low-noise scans of smaller, more detailed objects with complex surfaces, as well as large, intricate objects.
“With the help of in-house developed training techniques and CNNs (convolutional neural networks), we’ve managed to squeeze more information from the same amount of data captured from our existing 3D scanners and get a much richer and denser representation of the scene being scanned,” said Gleb Gusev, CTO of Artec 3D. “Now we’re able to receive up to 64 times more measurements from the same scanners, which more than doubles the resolution of the final model and significantly decreases noise. Another advantage of our new approach is the much more accurate reconstruction of the surfaces this technique provides compared to standard algorithms.”
Artec 3D has a deep history in computer vision and AI, creating AI algorithms for its own 3D facial recognition devices, as well as for technology industry leaders. Most notably, Artec 3D’s team of AI experts worked with Apple to help develop its Face ID. Now, Artec 3D has leveraged its expertise to apply AI not only to 3D faces, but to 3D objects of any kind. The convolutional neural network powering Artec 3D’s AI Engine in Studio 15 software has been trained using millions of data points and hundreds of thousands of 3D models to ensure optimum performance in HD Mode.