This week I trained a machine learning model, pix2pix, from its source code. The following is the working process.
Pix2pix is a machine learning algorithm that learns a mapping from an input image to a target (true) image: the generated output is compared against the true image during training, so the model gradually learns to produce images that resemble the true image given the corresponding input.
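The "compare with the true image" part of the training objective can be sketched as follows. In the TensorFlow pix2pix tutorial, the generator loss combines an adversarial term with an L1 distance to the true image, weighted by LAMBDA = 100. This NumPy version is only an illustration of that idea, not the actual training code I ran:

```python
import numpy as np

LAMBDA = 100  # weight on the L1 term, as in the TensorFlow tutorial

def generator_loss(disc_output, gen_image, target_image):
    # Adversarial term: binary cross-entropy pushing the discriminator's
    # score on the generated image toward "real" (1).
    eps = 1e-7
    gan_loss = -np.mean(np.log(np.clip(disc_output, eps, 1.0)))
    # L1 term: pixel-wise distance between generated and true image.
    l1_loss = np.mean(np.abs(target_image - gen_image))
    return gan_loss + LAMBDA * l1_loss

# Toy example: identical generated and true images give zero L1 loss,
# so only the adversarial term remains.
img = np.zeros((4, 4, 3))
loss = generator_loss(np.array([0.5]), img, img)
```

The L1 term keeps the output close to the true image pixel-wise, while the adversarial term pushes it toward looking realistic.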
(picture from https://www.tensorflow.org/tutorials/generative/pix2pix)
I used a training dataset from Kaggle named Matting Human Datasets. First, I used openFrameworks to generate the input images shown in figures 1(a) and 2(a): each input image is the set of boundaries extracted from the corresponding true image.
(figure 1, left – a, right – b; the right image is from Kaggle)
(figure 2, left – a, right – b; the right image is from Kaggle)
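My actual boundary extraction was done in openFrameworks, which I don't reproduce here; an equivalent idea can be sketched in NumPy as a simple gradient-magnitude edge map. The threshold value is an assumption for illustration:

```python
import numpy as np

def boundaries(gray, thresh=0.2):
    # Forward differences in x and y approximate the image gradient.
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, :-1] = gray[:, 1:] - gray[:, :-1]
    gy[:-1, :] = gray[1:, :] - gray[:-1, :]
    mag = np.sqrt(gx**2 + gy**2)
    # Pixels with a strong gradient are marked as boundary (1), rest 0.
    return (mag > thresh).astype(np.uint8)

# Toy image: a bright square on a dark background produces edges
# along the square's border and nothing in its interior.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = boundaries(img)
```

The resulting binary edge map plays the role of the (a) images: it keeps the human outline while discarding almost all color and texture information.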
I slightly modified several parameters in the original algorithm and trained the model for more than 12 hours. Figures 3 and 4 show the outputs in real time.
(figure 3, left – a, right – b)
(figure 4, left – a, right – b)
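For context, these are the kinds of parameters the tutorial exposes and that a run like mine would adjust. The values below are the TensorFlow tutorial's defaults; which ones I actually changed, and the step-time figure, are illustrative assumptions rather than a record of my run:

```python
# Default hyperparameters from the TensorFlow pix2pix tutorial.
config = {
    "learning_rate": 2e-4,    # Adam learning rate
    "beta_1": 0.5,            # Adam momentum term
    "lambda_l1": 100,         # weight on the L1 reconstruction loss
    "batch_size": 1,          # the tutorial trains with batch size 1
    "image_size": (256, 256), # inputs are resized to 256x256
}

def steps_for_hours(hours, sec_per_step=0.5):
    # Rough step budget for a timed run; sec_per_step is an assumed
    # average and depends entirely on the GPU used.
    return int(hours * 3600 / sec_per_step)

budget = steps_for_hours(12)  # step budget for a ~12-hour run
```

Budgeting by wall-clock time rather than epochs is convenient here because the tutorial's training loop counts steps, and a 12-hour run maps directly onto a step count once the per-step time is measured.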
Before this experiment, I found that most machine learning models that generate human images are trained on true human photographs. So I wondered what would happen if I trained the model on input images containing only human boundaries and a little color information, to see how it would reshape human forms. The result is interesting: although the input images supply only a little color information, the model can still output human-like images.
Ref:
1.Pix2pix: https://www.tensorflow.org/tutorials/generative/pix2pix
2. Matting Human Datasets (Kaggle): currently the largest portrait matting dataset, containing 34,427 images with corresponding matting results. The dataset was annotated by Beijing Play Star Convergence Technology Co., Ltd., and a portrait soft-segmentation model trained on it has been commercialized. The original images come from Flickr, Baidu, and Taobao; after face detection and area cropping, 600×800 half-length portraits were generated.