Python Makes You an AI Painting Master, Simply Stunning! (Code Included)

Author: Li Qiujian | Editor: Li Xuejing | Header image: CSDN, from Visual China

Introduction

To make up for the shortcomings of my earlier CSDN article on turning face photos into cartoons with CycleGAN, today I would like to share a more polished cartoonization project: "Learning to Cartoonize Using White-box Cartoon Representations". Compared with the face-cartoonization project shared before, it has two advantages:

1. General applicability: it can cartoonize arbitrary images, and is no longer limited to face photos of a particular size.
2. Better cartoonization quality (see the comparison images in the original post).

The method is still based on a GAN, but three white-box representations handle the image's structure, surface, and texture separately, and the resulting image translation outperforms earlier approaches such as CartoonGAN. Today we will use the source code released with the paper to build the model and cartoonize the images we need. The workflow is described below.

Preparation

We use Python 3.6.5 and the following modules:

argparse: defines the command-line arguments.
utils: wraps the commonly used helper functions into a single interface.
numpy: matrix operations.
tensorflow: building, training, and testing the network model.
tqdm: progress bars for loops.
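For reference, here is a minimal import block that the snippets below assume; it is a sketch rather than part of the original article. slim is TF-Slim from tensorflow.contrib (TensorFlow 1.x), and network, loss, utils, layers, and guided_filter are the modules shipped with the paper's repository.

import os
import numpy as np
import cv2
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tqdm import tqdm

# Modules from the released source code, assumed to be importable from the repo:
import network        # generators and discriminators (network.py)
import loss           # VGG19 wrapper and GAN losses (loss.py)
import utils          # batching, color shift, superpixel helpers (utils.py)
import layers         # spectral-normalized conv layers (layers.py)
import guided_filter  # guided filter ops; the training code also uses: from guided_filter import guided_filter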

Defining and training the network model

Different cartoon styles require task-specific assumptions or prior knowledge, and correspondingly different algorithms. For example, some cartoon styles care mostly about global color, and line contours are secondary; in others, sparse, clean color blocks dominate the artistic expression. Because of these differing requirements, a generic model cannot produce a convincing cartoon effect. The paper therefore solves the problem by handling the surface, structure, and texture representations separately.

(1) Defining the network layers

1.1 Defining resblock

The residual block keeps the shortcut branch and the convolution output at the same number of channels, so the two tensors can be added directly:

def resblock(inputs, out_channel=32, name='resblock'):
    with tf.variable_scope(name):
        x = slim.convolution2d(inputs, out_channel, [3, 3],
                               activation_fn=None, scope='conv1')
        x = tf.nn.leaky_relu(x)
        x = slim.convolution2d(x, out_channel, [3, 3],
                               activation_fn=None, scope='conv2')
        return x + inputs
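As a quick illustration (a hypothetical check, not from the original article), the block returns a tensor with the same shape as its input, which is exactly what makes the x + inputs shortcut valid; out_channel therefore has to match the channel count of inputs.

feat = tf.zeros([1, 64, 64, 128])
out = resblock(feat, out_channel=128, name='shape_check')
print(out.get_shape())  # (1, 64, 64, 128): same shape as the input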

1.2 Defining the generator functions

def generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False):
    with tf.variable_scope(name, reuse=reuse):
        x = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None)
        x = tf.nn.leaky_relu(x)

        x = slim.convolution2d(x, channel*2, [3, 3], stride=2, activation_fn=None)
        x = slim.convolution2d(x, channel*2, [3, 3], activation_fn=None)
        x = tf.nn.leaky_relu(x)

        x = slim.convolution2d(x, channel*4, [3, 3], stride=2, activation_fn=None)
        x = slim.convolution2d(x, channel*4, [3, 3], activation_fn=None)
        x = tf.nn.leaky_relu(x)

        for idx in range(num_blocks):
            x = resblock(x, out_channel=channel*4, name='block_{}'.format(idx))

        x = slim.conv2d_transpose(x, channel*2, [3, 3], stride=2, activation_fn=None)
        x = slim.convolution2d(x, channel*2, [3, 3], activation_fn=None)
        x = tf.nn.leaky_relu(x)

        x = slim.conv2d_transpose(x, channel, [3, 3], stride=2, activation_fn=None)
        x = slim.convolution2d(x, channel, [3, 3], activation_fn=None)
        x = tf.nn.leaky_relu(x)

        x = slim.convolution2d(x, 3, [7, 7], activation_fn=None)
        # x = tf.clip_by_value(x, -0.999999, 0.999999)
        return x

def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False):
    with tf.variable_scope(name, reuse=reuse):
        x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None)
        x0 = tf.nn.leaky_relu(x0)

        x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None)
        x1 = tf.nn.leaky_relu(x1)
        x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None)
        x1 = tf.nn.leaky_relu(x1)

        x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None)
        x2 = tf.nn.leaky_relu(x2)
        x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None)
        x2 = tf.nn.leaky_relu(x2)

        for idx in range(num_blocks):
            x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx))

        x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None)
        x2 = tf.nn.leaky_relu(x2)

        h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2]
        x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2))
        x3 = slim.convolution2d(x3 + x1, channel*2, [3, 3], activation_fn=None)
        x3 = tf.nn.leaky_relu(x3)
        x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None)
        x3 = tf.nn.leaky_relu(x3)

        h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2]
        x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2))
        x4 = slim.convolution2d(x4 + x0, channel, [3, 3], activation_fn=None)
        x4 = tf.nn.leaky_relu(x4)
        x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None)
        # x4 = tf.clip_by_value(x4, -1, 1)
        return x4

1.3 Defining the discriminators for the surface and texture representations

Three variants are provided, differing only in the normalization they use: batch normalization (disc_bn), spectral normalization (disc_sn), and layer normalization (disc_ln).

def disc_bn(x, scale=1, channel=32, is_training=True,
            name='discriminator', patch=True, reuse=False):
    with tf.variable_scope(name, reuse=reuse):
        for idx in range(3):
            x = slim.convolution2d(x, channel*2**idx, [3, 3], stride=2, activation_fn=None)
            x = slim.batch_norm(x, is_training=is_training, center=True, scale=True)
            x = tf.nn.leaky_relu(x)

            x = slim.convolution2d(x, channel*2**idx, [3, 3], activation_fn=None)
            x = slim.batch_norm(x, is_training=is_training, center=True, scale=True)
            x = tf.nn.leaky_relu(x)

        if patch == True:
            x = slim.convolution2d(x, 1, [1, 1], activation_fn=None)
        else:
            x = tf.reduce_mean(x, axis=[1, 2])
            x = slim.fully_connected(x, 1, activation_fn=None)
        return x

def disc_sn(x, scale=1, channel=32, patch=True, name='discriminator', reuse=False):
    with tf.variable_scope(name, reuse=reuse):
        for idx in range(3):
            x = layers.conv_spectral_norm(x, channel*2**idx, [3, 3],
                                          stride=2, name='conv{}_1'.format(idx))
            x = tf.nn.leaky_relu(x)

            x = layers.conv_spectral_norm(x, channel*2**idx, [3, 3],
                                          name='conv{}_2'.format(idx))
            x = tf.nn.leaky_relu(x)

        if patch == True:
            x = layers.conv_spectral_norm(x, 1, [1, 1], name='conv_out')
        else:
            x = tf.reduce_mean(x, axis=[1, 2])
            x = slim.fully_connected(x, 1, activation_fn=None)
        return x

def disc_ln(x, channel=32, is_training=True, name='discriminator', patch=True, reuse=False):
    with tf.variable_scope(name, reuse=reuse):
        for idx in range(3):
            x = slim.convolution2d(x, channel*2**idx, [3, 3], stride=2, activation_fn=None)
            x = tf.contrib.layers.layer_norm(x)
            x = tf.nn.leaky_relu(x)

            x = slim.convolution2d(x, channel*2**idx, [3, 3], activation_fn=None)
            x = tf.contrib.layers.layer_norm(x)
            x = tf.nn.leaky_relu(x)

        if patch == True:
            x = slim.convolution2d(x, 1, [1, 1], activation_fn=None)
        else:
            x = tf.reduce_mean(x, axis=[1, 2])
            x = slim.fully_connected(x, 1, activation_fn=None)
        return x
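During training these discriminators are consumed through loss.lsgan_loss (see the train function below). The released loss.py is not reproduced in the article, so the following is only a minimal sketch of an LSGAN-style loss compatible with the patch discriminators above; the actual implementation in the repository may differ in its details.

def lsgan_loss(discriminator, real, fake, scale=1, patch=False, name='discriminator'):
    # Score real and fake batches with shared discriminator weights.
    real_logit = discriminator(real, scale, patch=patch, name=name, reuse=False)
    fake_logit = discriminator(fake, scale, patch=patch, name=name, reuse=True)
    # Least-squares GAN objectives for the generator and the discriminator.
    g_loss = tf.reduce_mean((fake_logit - 1)**2)
    d_loss = 0.5 * (tf.reduce_mean((real_logit - 1)**2) + tf.reduce_mean(fake_logit**2))
    return d_loss, g_loss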

(2) Training the model

Adaptive coloring is applied with clip_by_value in the last layer of the network, but it is not very stable. To reproduce the results reliably, use power=1.0 and comment out the clip_by_value call in network.py first.

def train(args):
    input_photo = tf.placeholder(tf.float32, [args.batch_size,
                                 args.patch_size, args.patch_size, 3])
    input_superpixel = tf.placeholder(tf.float32, [args.batch_size,
                                      args.patch_size, args.patch_size, 3])
    input_cartoon = tf.placeholder(tf.float32, [args.batch_size,
                                   args.patch_size, args.patch_size, 3])

    output = network.unet_generator(input_photo)
    output = guided_filter(input_photo, output, r=1)

    blur_fake = guided_filter(output, output, r=5, eps=2e-1)
    blur_cartoon = guided_filter(input_cartoon, input_cartoon, r=5, eps=2e-1)

    gray_fake, gray_cartoon = utils.color_shift(output, input_cartoon)

    d_loss_gray, g_loss_gray = loss.lsgan_loss(network.disc_sn, gray_cartoon, gray_fake,
                                               scale=1, patch=True, name='disc_gray')
    d_loss_blur, g_loss_blur = loss.lsgan_loss(network.disc_sn, blur_cartoon, blur_fake,
                                               scale=1, patch=True, name='disc_blur')

    vgg_model = loss.Vgg19('vgg19_no_fc.npy')
    vgg_photo = vgg_model.build_conv4_4(input_photo)
    vgg_output = vgg_model.build_conv4_4(output)
    vgg_superpixel = vgg_model.build_conv4_4(input_superpixel)
    h, w, c = vgg_photo.get_shape().as_list()[1:]

    photo_loss = tf.reduce_mean(tf.losses.absolute_difference(vgg_photo, vgg_output)) / (h*w*c)
    superpixel_loss = tf.reduce_mean(tf.losses.absolute_difference(vgg_superpixel, vgg_output)) / (h*w*c)
    recon_loss = photo_loss + superpixel_loss
    tv_loss = loss.total_variation_loss(output)

    g_loss_total = 1e4*tv_loss + 1e-1*g_loss_blur + g_loss_gray + 2e2*recon_loss
    d_loss_total = d_loss_blur + d_loss_gray

    all_vars = tf.trainable_variables()
    gene_vars = [var for var in all_vars if 'gene' in var.name]
    disc_vars = [var for var in all_vars if 'disc' in var.name]

    tf.summary.scalar('tv_loss', tv_loss)
    tf.summary.scalar('photo_loss', photo_loss)
    tf.summary.scalar('superpixel_loss', superpixel_loss)
    tf.summary.scalar('recon_loss', recon_loss)
    tf.summary.scalar('d_loss_gray', d_loss_gray)
    tf.summary.scalar('g_loss_gray', g_loss_gray)
    tf.summary.scalar('d_loss_blur', d_loss_blur)
    tf.summary.scalar('g_loss_blur', g_loss_blur)
    tf.summary.scalar('d_loss_total', d_loss_total)
    tf.summary.scalar('g_loss_total', g_loss_total)

    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        g_optim = tf.train.AdamOptimizer(args.adv_train_lr, beta1=0.5, beta2=0.99) \
            .minimize(g_loss_total, var_list=gene_vars)
        d_optim = tf.train.AdamOptimizer(args.adv_train_lr, beta1=0.5, beta2=0.99) \
            .minimize(d_loss_total, var_list=disc_vars)

    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=args.gpu_fraction)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

    train_writer = tf.summary.FileWriter(args.save_dir + '/train_log')
    summary_op = tf.summary.merge_all()
    saver = tf.train.Saver(var_list=gene_vars, max_to_keep=20)

    with tf.device('/device:GPU:0'):
        sess.run(tf.global_variables_initializer())
        saver.restore(sess, tf.train.latest_checkpoint('pretrain/saved_models'))

        face_photo_dir = 'dataset/photo_face'
        face_photo_list = utils.load_image_list(face_photo_dir)
        scenery_photo_dir = 'dataset/photo_scenery'
        scenery_photo_list = utils.load_image_list(scenery_photo_dir)
        face_cartoon_dir = 'dataset/cartoon_face'
        face_cartoon_list = utils.load_image_list(face_cartoon_dir)
        scenery_cartoon_dir = 'dataset/cartoon_scenery'
        scenery_cartoon_list = utils.load_image_list(scenery_cartoon_dir)

        for total_iter in tqdm(range(args.total_iter)):

            if np.mod(total_iter, 5) == 0:
                photo_batch = utils.next_batch(face_photo_list, args.batch_size)
                cartoon_batch = utils.next_batch(face_cartoon_list, args.batch_size)
            else:
                photo_batch = utils.next_batch(scenery_photo_list, args.batch_size)
                cartoon_batch = utils.next_batch(scenery_cartoon_list, args.batch_size)

            inter_out = sess.run(output, feed_dict={input_photo: photo_batch,
                                                    input_superpixel: photo_batch,
                                                    input_cartoon: cartoon_batch})

            if args.use_enhance:
                superpixel_batch = utils.selective_adacolor(inter_out, power=1.2)
            else:
                superpixel_batch = utils.simple_superpixel(inter_out, seg_num=200)

            _, g_loss, r_loss = sess.run([g_optim, g_loss_total, recon_loss],
                                         feed_dict={input_photo: photo_batch,
                                                    input_superpixel: superpixel_batch,
                                                    input_cartoon: cartoon_batch})

            _, d_loss, train_info = sess.run([d_optim, d_loss_total, summary_op],
                                             feed_dict={input_photo: photo_batch,
                                                        input_superpixel: superpixel_batch,
                                                        input_cartoon: cartoon_batch})

            train_writer.add_summary(train_info, total_iter)

            if np.mod(total_iter + 1, 50) == 0:
                print('Iter: {}, d_loss: {}, g_loss: {}, recon_loss: {}'
                      .format(total_iter, d_loss, g_loss, r_loss))

                if np.mod(total_iter + 1, 500) == 0:
                    saver.save(sess, args.save_dir + '/saved_models/model',
                               write_meta_graph=False, global_step=total_iter)

                    photo_face = utils.next_batch(face_photo_list, args.batch_size)
                    cartoon_face = utils.next_batch(face_cartoon_list, args.batch_size)
                    photo_scenery = utils.next_batch(scenery_photo_list, args.batch_size)
                    cartoon_scenery = utils.next_batch(scenery_cartoon_list, args.batch_size)

                    result_face = sess.run(output, feed_dict={input_photo: photo_face,
                                                              input_superpixel: photo_face,
                                                              input_cartoon: cartoon_face})
                    result_scenery = sess.run(output, feed_dict={input_photo: photo_scenery,
                                                                 input_superpixel: photo_scenery,
                                                                 input_cartoon: cartoon_scenery})

                    utils.write_batch_image(result_face, args.save_dir + '/images',
                                            str(total_iter) + '_face_result.jpg', 4)
                    utils.write_batch_image(photo_face, args.save_dir + '/images',
                                            str(total_iter) + '_face_photo.jpg', 4)
                    utils.write_batch_image(result_scenery, args.save_dir + '/images',
                                            str(total_iter) + '_scenery_result.jpg', 4)
                    utils.write_batch_image(photo_scenery, args.save_dir + '/images',
                                            str(total_iter) + '_scenery_photo.jpg', 4)
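The train function depends on utils.color_shift for the single-channel texture representation and on utils.simple_superpixel / utils.selective_adacolor for the structure representation. Those helpers live in the repository's utils.py and are not reproduced in the article. The sketch below only illustrates the idea; the function names, the random weight ranges, and the use of scikit-image's SLIC are assumptions, not the repository's exact code.

from skimage.segmentation import slic

def random_color_shift(image1, image2):
    # Randomly re-weight the B, G, R channels and collapse them to a single channel,
    # so the texture discriminator cannot rely on color or luminance.
    b1, g1, r1 = tf.split(image1, 3, axis=3)
    b2, g2, r2 = tf.split(image2, 3, axis=3)
    b_w = tf.random_uniform([1], 0.014, 0.214)
    g_w = tf.random_uniform([1], 0.487, 0.687)
    r_w = tf.random_uniform([1], 0.199, 0.399)
    out1 = (b_w*b1 + g_w*g1 + r_w*r1) / (b_w + g_w + r_w)
    out2 = (b_w*b2 + g_w*g2 + r_w*r2) / (b_w + g_w + r_w)
    return out1, out2

def superpixel_flatten(batch, seg_num=200):
    # Replace every SLIC superpixel with its mean color: a rough structure representation.
    results = []
    for img in batch:
        segments = slic((img + 1) / 2, n_segments=seg_num)
        flat = np.zeros_like(img)
        for label in np.unique(segments):
            mask = segments == label
            flat[mask] = img[mask].mean(axis=0)
        results.append(flat)
    return np.array(results)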

Testing and using the model

(1) Automatic resizing of the input image and the guided filter definitions

def resize_crop(image):
    h, w, c = np.shape(image)
    if min(h, w) > 720:
        if h > w:
            h, w = int(720*h/w), 720
        else:
            h, w = 720, int(720*w/h)
    image = cv2.resize(image, (w, h),
                       interpolation=cv2.INTER_AREA)
    h, w = (h//8)*8, (w//8)*8
    image = image[:h, :w, :]
    return image

def tf_box_filter(x, r):
    k_size = int(2*r + 1)
    ch = x.get_shape().as_list()[-1]
    weight = 1 / (k_size**2)
    box_kernel = weight * np.ones((k_size, k_size, ch, 1))
    box_kernel = np.array(box_kernel).astype(np.float32)
    output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME')
    return output

def guided_filter(x, y, r, eps=1e-2):
    x_shape = tf.shape(x)
    # y_shape = tf.shape(y)

    N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r)

    mean_x = tf_box_filter(x, r) / N
    mean_y = tf_box_filter(y, r) / N
    cov_xy = tf_box_filter(x*y, r) / N - mean_x*mean_y
    var_x = tf_box_filter(x*x, r) / N - mean_x*mean_x

    A = cov_xy / (var_x + eps)
    b = mean_y - A*mean_x

    mean_A = tf_box_filter(A, r) / N
    mean_b = tf_box_filter(b, r) / N

    output = mean_A*x + mean_b
    return output

def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8):
    # assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4

    lr_x_shape = tf.shape(lr_x)
    # lr_y_shape = tf.shape(lr_y)
    hr_x_shape = tf.shape(hr_x)

    N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r)

    mean_x = tf_box_filter(lr_x, r) / N
    mean_y = tf_box_filter(lr_y, r) / N
    cov_xy = tf_box_filter(lr_x*lr_y, r) / N - mean_x*mean_y
    var_x = tf_box_filter(lr_x*lr_x, r) / N - mean_x*mean_x

    A = cov_xy / (var_x + eps)
    b = mean_y - A*mean_x

    mean_A = tf.image.resize_images(A, hr_x_shape[1:3])
    mean_b = tf.image.resize_images(b, hr_x_shape[1:3])

    output = mean_A*hr_x + mean_b
    return output
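The guided filter plays two roles in this pipeline, as the train function above shows: with r=1 it transfers the input photo's edges onto the generator output, and with r=5, eps=2e-1 it produces the smoothed surface representation fed to the blur discriminator. A small usage sketch, assuming a fixed 256x256 input:

photo = tf.placeholder(tf.float32, [1, 256, 256, 3])
fake = network.unet_generator(photo)
# Edge-preserving refinement of the generator output (used as the final result).
refined = guided_filter(photo, fake, r=1)
# Smoothed "surface" representation of the refined output.
surface = guided_filter(refined, refined, r=5, eps=2e-1)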

(2) Defining the cartoonize function

def cartoonize(load_folder, save_folder, model_path):
    input_photo = tf.placeholder(tf.float32, [1, None, None, 3])
    network_out = network.unet_generator(input_photo)
    final_out = guided_filter.guided_filter(input_photo, network_out, r=1, eps=5e-3)

    all_vars = tf.trainable_variables()
    gene_vars = [var for var in all_vars if 'generator' in var.name]
    saver = tf.train.Saver(var_list=gene_vars)

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)

    sess.run(tf.global_variables_initializer())
    saver.restore(sess, tf.train.latest_checkpoint(model_path))
    name_list = os.listdir(load_folder)
    for name in tqdm(name_list):
        try:
            load_path = os.path.join(load_folder, name)
            save_path = os.path.join(save_folder, name)
            image = cv2.imread(load_path)
            image = resize_crop(image)
            batch_image = image.astype(np.float32)/127.5 - 1
            batch_image = np.expand_dims(batch_image, axis=0)
            output = sess.run(final_out, feed_dict={input_photo: batch_image})
            output = (np.squeeze(output) + 1) * 127.5
            output = np.clip(output, 0, 255).astype(np.uint8)
            cv2.imwrite(save_path, output)
        except:
            print('cartoonize {} failed'.format(load_path))

(3) Calling the model

model_path = 'saved_models'
load_folder = 'test_images'
save_folder = 'cartoonized_images'
if not os.path.exists(save_folder):
    os.mkdir(save_folder)
cartoonize(load_folder, save_folder, model_path)

(4) Running the code

Run python cartoonize.py inside the test_code folder. The generated images are written to the cartoonized_images folder.

Summary

The input image is passed through a guided filter to obtain the surface representation, through superpixel processing to obtain the structure representation, and through a random color shift to obtain the texture representation; the cartoon images are processed in the same way. Losses are then computed between the fake image produced by the GAN generator and each of these representations: the texture and surface representations go through discriminators to obtain adversarial losses, while the structure representation of the fake image versus the fake image itself, and the input image versus the fake image, are compared through features extracted with a VGG19 network.

Full code: https://pan.baidu.com/s/10YklnSRIw_mc6W4ovlP3uw (extraction code: pluq)

About the author: Li Qiujian is a CSDN blog expert and CSDN course author, currently a master's student at China University of Mining and Technology, with award-winning TapTap competition projects among his work.
