Researchers at the Max Planck Institute for Informatics have developed a new AI tool called DragGAN. It lets users manipulate 2-D images as though they were posing a 3-D model, using a point-based editing system.
The tool is built on a generative adversarial network (GAN), a deep neural network framework that learns from a set of training data and can then generate new data with the same characteristics. The project is led by Dr. Xingang Pan, and the team's goal was to create a system that is both precise and flexible while remaining easy to use. DragGAN is described in detail in their research paper.
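To make the adversarial idea concrete, here is a deliberately tiny, pure-Python sketch of a GAN on one-dimensional data. Everything in it (the 1-D setup, the parameter names, the learning rates) is illustrative only; DragGAN itself builds on a large image GAN, not anything this simple. The generator shifts random noise by a single learned parameter, the discriminator is a logistic classifier, and the two are trained against each other until the generator's output distribution matches the real one.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Real data comes from a Gaussian centered at 3; the generator produces
# theta + z for noise z, so "learning" means moving theta toward 3.
theta = 0.0          # generator parameter (starts far from the real mean)
w, b = 0.0, 0.0      # discriminator parameters: d(x) = sigmoid(w*x + b)
lr = 0.05            # learning rate for both players

for _ in range(8000):
    z = random.gauss(0.0, 1.0)
    x_real = random.gauss(3.0, 0.5)
    x_fake = theta + z

    # Discriminator step: push d(x_real) toward 1 and d(x_fake) toward 0
    # (gradient descent on -log d(x_real) - log(1 - d(x_fake))).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: push d(x_fake) toward 1 by moving theta
    # (gradient descent on -log d(x_fake), where x_fake = theta + z).
    d_fake = sigmoid(w * x_fake + b)
    theta += lr * (1 - d_fake) * w

print(round(theta, 2))  # theta drifts toward the real mean of 3
```

The same two-player dynamic, scaled up to millions of parameters and image-sized outputs, is what lets a GAN synthesize photorealistic pictures.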
The GAN is controlled through a point-based manipulation system. The user interacts with the image by placing one or more handle points, each with a matching target point. As shown in Fig. 1, the red points are the handle points and the blue points are the target points. The GAN then drags each red point toward its blue point, generating the new image. Users can also apply a light gray mask to restrict the edit to a specific region.
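At the interface level, such an edit can be thought of as a list of handle/target pairs plus an optional mask, with each handle nudged a small step toward its target on every iteration. The sketch below shows only that bookkeeping; the class and function names are hypothetical, and the real DragGAN moves points by supervising the GAN's internal features rather than by this plain interpolation.

```python
from dataclasses import dataclass, field

Point = tuple[float, float]

@dataclass
class DragEdit:
    handles: list[Point]          # red points: where the edit grabs the image
    targets: list[Point]          # blue points: where each handle should end up
    mask: set[Point] = field(default_factory=set)  # optional editable region

    def step(self, size: float = 1.0) -> bool:
        """Move every handle one step toward its target.
        Returns True while any handle is still in motion."""
        moving = False
        for i, ((hx, hy), (tx, ty)) in enumerate(zip(self.handles, self.targets)):
            dx, dy = tx - hx, ty - hy
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= size:      # close enough: snap onto the target
                self.handles[i] = (tx, ty)
            else:                 # otherwise take one step along the line
                self.handles[i] = (hx + size * dx / dist, hy + size * dy / dist)
                moving = True
        return moving

# Pressing "start" corresponds to looping until every handle arrives.
edit = DragEdit(handles=[(10.0, 10.0)], targets=[(20.0, 10.0)])
steps = 0
while edit.step(size=2.0):
    steps += 1
print(edit.handles, steps)
```

Each loop iteration stands in for one optimization step of the real system, which is why the tool can show the image morphing live as the points travel.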
After placing the handles, the user simply clicks the start button and the GAN drags the handle points to the target points, manipulating the image; a stop button halts the process at any time. Once finished, users only need to click the save button. It is worth noting that DragGAN is currently available for non-commercial use only.
The code for DragGAN can be found on Dr. Xingang Pan's GitHub page. The tool is poised to reshape the field of image manipulation, offering a combination of precision and flexibility unlike anything else currently available.