Deforum Stable Diffusion - Ipynb

The document describes the setup and configuration for Deforum Stable Diffusion v0.7. It includes code to set up the environment, load the model, and define settings for animation parameters and other options. The settings allow control over animation mode, motion, unsharp masking, and color coherence for generating images and videos.

Uploaded by

Vishal Singh

{

"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "ByGXyiHZWM_q"
},
"source": [
"# **Deforum Stable Diffusion v0.7**\n",
"[Stable Diffusion](https://github.com/CompVis/stable-diffusion) by Robin
Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer and the
[Stability.ai](https://stability.ai/) Team. [K
Diffusion](https://github.com/crowsonkb/k-diffusion) by [Katherine Crowson]
(https://twitter.com/RiversHaveWings). Notebook by
[deforum](https://discord.gg/upmXXsrwZc)\n",
"\n",
"[Quick
Guide](https://docs.google.com/document/d/1RrQv7FntzOuLg4ohjRZPVL7iptIyBhwwbcEYEW2O
fcI/edit?usp=sharing) to Deforum v0.7"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "IJjzzkKlWM_s"
},
"outputs": [],
"source": [
"#@markdown **NVIDIA GPU**\n",
"import subprocess, os, sys\n",
"sub_p_res = subprocess.run(['nvidia-smi', '--query-gpu=name,memory.total,memory.free', '--format=csv,noheader'], stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
"print(f\"{sub_p_res[:-1]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UA8-efH-WM_t"
},
"source": [
"# Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "vohUiWo-I2HQ"
},
"outputs": [],
"source": [
"#@markdown **Environment Setup**\n",
"import subprocess, time, gc, os, sys\n",
"\n",
"def setup_environment():\n",
" start_time = time.time()\n",
" print_subprocess = False\n",
" use_xformers_for_colab = True\n",
" try:\n",
" ipy = get_ipython()\n",
" except:\n",
" ipy = 'could not get_ipython'\n",
" if 'google.colab' in str(ipy):\n",
" print(\"..setting up environment\")\n",
"\n",
" # weird hack\n",
" import torch\n",
" \n",
" all_process = [\n",
" ['pip', 'install', 'omegaconf', 'einops==0.4.1', 'pytorch-lightning==1.7.7', 'torchmetrics', 'transformers', 'safetensors', 'kornia'],\n",
" ['git', 'clone', 'https://github.com/deforum-art/deforum-stable-diffusion'],\n",
" ['pip', 'install', 'accelerate', 'ftfy', 'jsonmerge', 'matplotlib', 'resize-right', 'timm', 'torchdiffeq','scikit-learn','torchsde','open-clip-torch','numpngw'],\n",
" ]\n",
" for process in all_process:\n",
" running = subprocess.run(process,stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
" if print_subprocess:\n",
" print(running)\n",
" with open('deforum-stable-diffusion/src/k_diffusion/__init__.py', 'w') as f:\n",
" f.write('')\n",
" sys.path.extend([\n",
" 'deforum-stable-diffusion/',\n",
" 'deforum-stable-diffusion/src',\n",
" ])\n",
" if use_xformers_for_colab:\n",
"\n",
" print(\"..installing triton and xformers\")\n",
"\n",
" all_process = [['pip', 'install', 'triton==2.0.0.dev20221202', 'xformers==0.0.16rc424']]\n",
" for process in all_process:\n",
" running = subprocess.run(process,stdout=subprocess.PIPE).stdout.decode('utf-8')\n",
" if print_subprocess:\n",
" print(running)\n",
" else:\n",
" sys.path.extend([\n",
" 'src'\n",
" ])\n",
" end_time = time.time()\n",
" print(f\"..environment set up in {end_time-start_time:.0f} seconds\")\n",
" return\n",
"\n",
"setup_environment()\n",
"\n",
"import torch\n",
"import random\n",
"import clip\n",
"from IPython import display\n",
"from types import SimpleNamespace\n",
"from helpers.save_images import get_output_folder\n",
"from helpers.settings import load_args\n",
"from helpers.render import render_animation, render_input_video, render_image_batch, render_interpolation\n",
"from helpers.model_load import make_linear_decode, load_model, get_model_output_paths\n",
"from helpers.aesthetics import load_aesthetics_model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "0D2HQO-PWM_t"
},
"outputs": [],
"source": [
"#@markdown **Path Setup**\n",
"\n",
"def Root():\n",
" models_path = \"models\" #@param {type:\"string\"}\n",
" configs_path = \"configs\" #@param {type:\"string\"}\n",
" output_path = \"outputs\" #@param {type:\"string\"}\n",
" mount_google_drive = True #@param {type:\"boolean\"}\n",
" models_path_gdrive = \"/content/drive/MyDrive/AI/models\" #@param {type:\"string\"}\n",
" output_path_gdrive = \"/content/drive/MyDrive/AI/StableDiffusion\" #@param {type:\"string\"}\n",
"\n",
" #@markdown **Model Setup**\n",
" map_location = \"cuda\" #@param [\"cpu\", \"cuda\"]\n",
" model_config = \"v1-inference.yaml\" #@param [\"custom\",\"v2-inference.yaml\",\"v2-inference-v.yaml\",\"v1-inference.yaml\"]\n",
" model_checkpoint = \"Protogen_V2.2.ckpt\" #@param [\"custom\",\"v2-1_768-ema-pruned.ckpt\",\"v2-1_512-ema-pruned.ckpt\",\"768-v-ema.ckpt\",\"512-base-ema.ckpt\",\"Protogen_V2.2.ckpt\",\"v1-5-pruned.ckpt\",\"v1-5-pruned-emaonly.ckpt\",\"sd-v1-4-full-ema.ckpt\",\"sd-v1-4.ckpt\",\"sd-v1-3-full-ema.ckpt\",\"sd-v1-3.ckpt\",\"sd-v1-2-full-ema.ckpt\",\"sd-v1-2.ckpt\",\"sd-v1-1-full-ema.ckpt\",\"sd-v1-1.ckpt\", \"robo-diffusion-v1.ckpt\",\"wd-v1-3-float16.ckpt\"]\n",
" custom_config_path = \"\" #@param {type:\"string\"}\n",
" custom_checkpoint_path = \"\" #@param {type:\"string\"}\n",
" return locals()\n",
"\n",
"root = Root()\n",
"root = SimpleNamespace(**root)\n",
"\n",
"root.models_path, root.output_path = get_model_output_paths(root)\n",
"root.model, root.device = load_model(root, load_on_run_all=True, check_sha256=True, map_location=root.map_location)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6JxwhBwtWM_t"
},
"source": [
"# Settings"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "E0tJVYA4WM_u"
},
"outputs": [],
"source": [
"def DeforumAnimArgs():\n",
"\n",
" #@markdown ####**Animation:**\n",
" animation_mode = 'None' #@param ['None', '2D', '3D', 'Video Input', 'Interpolation'] {type:'string'}\n",
" max_frames = 1000 #@param {type:\"number\"}\n",
" border = 'replicate' #@param ['wrap', 'replicate'] {type:'string'}\n",
"\n",
" #@markdown ####**Motion Parameters:**\n",
" angle = \"0:(0)\"#@param {type:\"string\"}\n",
" zoom = \"0:(1.04)\"#@param {type:\"string\"}\n",
" translation_x = \"0:(10*sin(2*3.14*t/10))\"#@param {type:\"string\"}\n",
" translation_y = \"0:(0)\"#@param {type:\"string\"}\n",
" translation_z = \"0:(10)\"#@param {type:\"string\"}\n",
" rotation_3d_x = \"0:(0)\"#@param {type:\"string\"}\n",
" rotation_3d_y = \"0:(0)\"#@param {type:\"string\"}\n",
" rotation_3d_z = \"0:(0)\"#@param {type:\"string\"}\n",
" flip_2d_perspective = False #@param {type:\"boolean\"}\n",
" perspective_flip_theta = \"0:(0)\"#@param {type:\"string\"}\n",
" perspective_flip_phi = \"0:(t%15)\"#@param {type:\"string\"}\n",
" perspective_flip_gamma = \"0:(0)\"#@param {type:\"string\"}\n",
" perspective_flip_fv = \"0:(53)\"#@param {type:\"string\"}\n",
" noise_schedule = \"0: (0.02)\"#@param {type:\"string\"}\n",
" strength_schedule = \"0: (0.65)\"#@param {type:\"string\"}\n",
" contrast_schedule = \"0: (1.0)\"#@param {type:\"string\"}\n",
" hybrid_video_comp_alpha_schedule = \"0:(1)\" #@param {type:\"string\"}\n",
" hybrid_video_comp_mask_blend_alpha_schedule = \"0:(0.5)\" #@param {type:\"string\"}\n",
" hybrid_video_comp_mask_contrast_schedule = \"0:(1)\" #@param {type:\"string\"}\n",
" hybrid_video_comp_mask_auto_contrast_cutoff_high_schedule = \"0:(100)\" #@param {type:\"string\"}\n",
" hybrid_video_comp_mask_auto_contrast_cutoff_low_schedule = \"0:(0)\" #@param {type:\"string\"}\n",
"\n",
" #@markdown ####**Unsharp mask (anti-blur) Parameters:**\n",
" kernel_schedule = \"0: (5)\"#@param {type:\"string\"}\n",
" sigma_schedule = \"0: (1.0)\"#@param {type:\"string\"}\n",
" amount_schedule = \"0: (0.2)\"#@param {type:\"string\"}\n",
" threshold_schedule = \"0: (0.0)\"#@param {type:\"string\"}\n",
"\n",
" #@markdown ####**Coherence:**\n",
" color_coherence = 'Match Frame 0 LAB' #@param ['None', 'Match Frame 0 HSV', 'Match Frame 0 LAB', 'Match Frame 0 RGB', 'Video Input'] {type:'string'}\n",
" color_coherence_video_every_N_frames = 1 #@param {type:\"integer\"}\n",
" diffusion_cadence = '1' #@param ['1','2','3','4','5','6','7','8'] {type:'string'}\n",
"\n",
" #@markdown ####**3D Depth Warping:**\n",
" use_depth_warping = True #@param {type:\"boolean\"}\n",
" midas_weight = 0.3#@param {type:\"number\"}\n",
" near_plane = 200\n",
" far_plane = 10000\n",
" fov = 40#@param {type:\"number\"}\n",
" padding_mode = 'border'#@param ['border', 'reflection', 'zeros'] {type:'string'}\n",
" sampling_mode = 'bicubic'#@param ['bicubic', 'bilinear', 'nearest'] {type:'string'}\n",
" save_depth_maps = False #@param {type:\"boolean\"}\n",
"\n",
" #@markdown ####**Video Input:**\n",
" video_init_path ='/content/video_in.mp4'#@param {type:\"string\"}\n",
" extract_nth_frame = 1#@param {type:\"number\"}\n",
" overwrite_extracted_frames = True #@param {type:\"boolean\"}\n",
" use_mask_video = False #@param {type:\"boolean\"}\n",
" video_mask_path ='/content/video_in.mp4'#@param {type:\"string\"}\n",
"\n",
" #@markdown ####**Hybrid Video for 2D/3D Animation Mode:**\n",
" hybrid_video_generate_inputframes = False #@param {type:\"boolean\"}\n",
" hybrid_video_use_first_frame_as_init_image = True #@param {type:\"boolean\"}\n",
" hybrid_video_motion = \"None\" #@param ['None','Optical Flow','Perspective','Affine']\n",
" hybrid_video_flow_method = \"Farneback\" #@param ['Farneback','DenseRLOF','SF']\n",
" hybrid_video_composite = False #@param {type:\"boolean\"}\n",
" hybrid_video_comp_mask_type = \"None\" #@param ['None', 'Depth', 'Video Depth', 'Blend', 'Difference']\n",
" hybrid_video_comp_mask_inverse = False #@param {type:\"boolean\"}\n",
" hybrid_video_comp_mask_equalize = \"None\" #@param ['None','Before','After','Both']\n",
" hybrid_video_comp_mask_auto_contrast = False #@param {type:\"boolean\"}\n",
" hybrid_video_comp_save_extra_frames = False #@param {type:\"boolean\"}\n",
" hybrid_video_use_video_as_mse_image = False #@param {type:\"boolean\"}\n",
"\n",
" #@markdown ####**Interpolation:**\n",
" interpolate_key_frames = False #@param {type:\"boolean\"}\n",
" interpolate_x_frames = 4 #@param {type:\"number\"}\n",
" \n",
" #@markdown ####**Resume Animation:**\n",
" resume_from_timestring = False #@param {type:\"boolean\"}\n",
" resume_timestring = \"20220829210106\" #@param {type:\"string\"}\n",
"\n",
" return locals()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "i9fly1RIWM_u"
},
"outputs": [],
"source": [
"prompts = [\n",
" \"a beautiful lake by Asher Brown Durand, trending on Artstation\", # the first prompt I want\n",
" \"a beautiful portrait of a woman by Artgerm, trending on Artstation\", # the second prompt I want\n",
" #\"this prompt I don't want it I commented it out\",\n",
" #\"a nousr robot, trending on Artstation\", # use \"nousr robot\" with the robot diffusion model (see model_checkpoint setting)\n",
" #\"touhou 1girl komeiji_koishi portrait, green hair\", # waifu diffusion prompts can use danbooru tag groups (see model_checkpoint)\n",
" #\"this prompt has weights if prompt weighting enabled:2 can also do negative:-2\", # (see prompt_weighting)\n",
"]\n",
"\n",
"animation_prompts = {\n",
" 0: \"a beautiful apple, trending on Artstation\",\n",
" 20: \"a beautiful banana, trending on Artstation\",\n",
" 30: \"a beautiful coconut, trending on Artstation\",\n",
" 40: \"a beautiful durian, trending on Artstation\",\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "XVzhbmizWM_u"
},
"outputs": [],
"source": [
"#@markdown **Load Settings**\n",
"override_settings_with_file = False #@param {type:\"boolean\"}\n",
"settings_file = \"custom\" #@param [\"custom\", \"512x512_aesthetic_0.json\",\"512x512_aesthetic_1.json\",\"512x512_colormatch_0.json\",\"512x512_colormatch_1.json\",\"512x512_colormatch_2.json\",\"512x512_colormatch_3.json\"]\n",
"custom_settings_file = \"/content/drive/MyDrive/Settings.txt\"#@param {type:\"string\"}\n",
"\n",
"def DeforumArgs():\n",
" #@markdown **Image Settings**\n",
" W = 512 #@param\n",
" H = 512 #@param\n",
" W, H = map(lambda x: x - x % 64, (W, H)) # resize to integer multiple of 64\n",
" bit_depth_output = 8 #@param [8, 16, 32] {type:\"raw\"}\n",
"\n",
" #@markdown **Sampling Settings**\n",
" seed = -1 #@param\n",
" sampler = 'euler_ancestral' #@param [\"klms\",\"dpm2\",\"dpm2_ancestral\",\"heun\",\"euler\",\"euler_ancestral\",\"plms\", \"ddim\", \"dpm_fast\", \"dpm_adaptive\", \"dpmpp_2s_a\", \"dpmpp_2m\"]\n",
" steps = 50 #@param\n",
" scale = 7 #@param\n",
" ddim_eta = 0.0 #@param\n",
" dynamic_threshold = None\n",
" static_threshold = None \n",
"\n",
" #@markdown **Save & Display Settings**\n",
" save_samples = True #@param {type:\"boolean\"}\n",
" save_settings = True #@param {type:\"boolean\"}\n",
" display_samples = True #@param {type:\"boolean\"}\n",
" save_sample_per_step = False #@param {type:\"boolean\"}\n",
" show_sample_per_step = False #@param {type:\"boolean\"}\n",
"\n",
" #@markdown **Prompt Settings**\n",
" prompt_weighting = True #@param {type:\"boolean\"}\n",
" normalize_prompt_weights = True #@param {type:\"boolean\"}\n",
" log_weighted_subprompts = False #@param {type:\"boolean\"}\n",
"\n",
" #@markdown **Batch Settings**\n",
" n_batch = 1 #@param\n",
" batch_name = \"StableFun\" #@param {type:\"string\"}\n",
" filename_format = \"{timestring}_{index}_{prompt}.png\" #@param [\"{timestring}_{index}_{seed}.png\",\"{timestring}_{index}_{prompt}.png\"]\n",
" seed_behavior = \"iter\" #@param [\"iter\",\"fixed\",\"random\",\"ladder\",\"alternate\"]\n",
" seed_iter_N = 1 #@param {type:'integer'}\n",
" make_grid = False #@param {type:\"boolean\"}\n",
" grid_rows = 2 #@param \n",
" outdir = get_output_folder(root.output_path, batch_name)\n",
"\n",
" #@markdown **Init Settings**\n",
" use_init = False #@param {type:\"boolean\"}\n",
" strength = 0.1 #@param {type:\"number\"}\n",
" strength_0_no_init = True # Set the strength to 0 automatically when no init image is used\n",
" init_image = \"https://cdn.pixabay.com/photo/2022/07/30/13/10/green-longhorn-beetle-7353749_1280.jpg\" #@param {type:\"string\"}\n",
" # Whiter areas of the mask are areas that change more\n",
" use_mask = False #@param {type:\"boolean\"}\n",
" use_alpha_as_mask = False # use the alpha channel of the init image as the mask\n",
" mask_file = \"https://www.filterforge.com/wiki/images/archive/b/b7/20080927223728%21Polygonal_gradient_thumb.jpg\" #@param {type:\"string\"}\n",
" invert_mask = False #@param {type:\"boolean\"}\n",
" # Adjust mask image, 1.0 is no adjustment. Should be positive numbers.\n",
" mask_brightness_adjust = 1.0 #@param {type:\"number\"}\n",
" mask_contrast_adjust = 1.0 #@param {type:\"number\"}\n",
" # Overlay the masked image at the end of the generation so it does not get degraded by encoding and decoding\n",
" overlay_mask = True # {type:\"boolean\"}\n",
" # Blur edges of final overlay mask, if used. Minimum = 0 (no blur)\n",
" mask_overlay_blur = 5 # {type:\"number\"}\n",
"\n",
" #@markdown **Exposure/Contrast Conditional Settings**\n",
" mean_scale = 0 #@param {type:\"number\"}\n",
" var_scale = 0 #@param {type:\"number\"}\n",
" exposure_scale = 0 #@param {type:\"number\"}\n",
" exposure_target = 0.5 #@param {type:\"number\"}\n",
"\n",
" #@markdown **Color Match Conditional Settings**\n",
" colormatch_scale = 0 #@param {type:\"number\"}\n",
" colormatch_image = \"https://www.saasdesign.io/wp-content/uploads/2021/02/palette-3-min-980x588.png\" #@param {type:\"string\"}\n",
" colormatch_n_colors = 4 #@param {type:\"number\"}\n",
" ignore_sat_weight = 0 #@param {type:\"number\"}\n",
"\n",
" #@markdown **CLIP\\Aesthetics Conditional Settings**\n",
" clip_name = 'ViT-L/14' #@param ['ViT-L/14', 'ViT-L/14@336px',
'ViT-B/16', 'ViT-B/32']\n",
" clip_scale = 0 #@param {type:\"number\"}\n",
" aesthetics_scale = 0 #@param {type:\"number\"}\n",
" cutn = 1 #@param {type:\"number\"}\n",
" cut_pow = 0.0001 #@param {type:\"number\"}\n",
"\n",
" #@markdown **Other Conditional Settings**\n",
" init_mse_scale = 0 #@param {type:\"number\"}\n",
" init_mse_image = \"https://cdn.pixabay.com/photo/2022/07/30/13/10/green-longhorn-beetle-7353749_1280.jpg\" #@param {type:\"string\"}\n",
"\n",
" blue_scale = 0 #@param {type:\"number\"}\n",
" \n",
" #@markdown **Conditional Gradient Settings**\n",
" gradient_wrt = 'x0_pred' #@param [\"x\", \"x0_pred\"]\n",
" gradient_add_to = 'both' #@param [\"cond\", \"uncond\", \"both\"]\n",
" decode_method = 'linear' #@param [\"autoencoder\",\"linear\"]\n",
" grad_threshold_type = 'dynamic' #@param [\"dynamic\", \"static\", \"mean\", \"schedule\"]\n",
" clamp_grad_threshold = 0.2 #@param {type:\"number\"}\n",
" clamp_start = 0.2 #@param\n",
" clamp_stop = 0.01 #@param\n",
" grad_inject_timing = list(range(1,10)) #@param\n",
"\n",
" #@markdown **Speed vs VRAM Settings**\n",
" cond_uncond_sync = True #@param {type:\"boolean\"}\n",
"\n",
" n_samples = 1 # doesn't do anything\n",
" precision = 'autocast' \n",
" C = 4\n",
" f = 8\n",
"\n",
" prompt = \"\"\n",
" timestring = \"\"\n",
" init_latent = None\n",
" init_sample = None\n",
" init_sample_raw = None\n",
" mask_sample = None\n",
" init_c = None\n",
" seed_internal = 0\n",
"\n",
" return locals()\n",
"\n",
"args_dict = DeforumArgs()\n",
"anim_args_dict = DeforumAnimArgs()\n",
"\n",
"if override_settings_with_file:\n",
" load_args(args_dict, anim_args_dict, settings_file, custom_settings_file, verbose=False)\n",
"\n",
"args = SimpleNamespace(**args_dict)\n",
"anim_args = SimpleNamespace(**anim_args_dict)\n",
"\n",
"args.timestring = time.strftime('%Y%m%d%H%M%S')\n",
"args.strength = max(0.0, min(1.0, args.strength))\n",
"\n",
"# Load clip model if using clip guidance\n",
"if (args.clip_scale > 0) or (args.aesthetics_scale > 0):\n",
" root.clip_model = clip.load(args.clip_name, jit=False)[0].eval().requires_grad_(False).to(root.device)\n",
" if (args.aesthetics_scale > 0):\n",
" root.aesthetics_model = load_aesthetics_model(args, root)\n",
"\n",
"if args.seed == -1:\n",
" args.seed = random.randint(0, 2**32 - 1)\n",
"if not args.use_init:\n",
" args.init_image = None\n",
"if args.sampler == 'plms' and (args.use_init or anim_args.animation_mode != 'None'):\n",
" print(f\"Init images aren't supported with PLMS yet, switching to KLMS\")\n",
" args.sampler = 'klms'\n",
"if args.sampler != 'ddim':\n",
" args.ddim_eta = 0\n",
"\n",
"if anim_args.animation_mode == 'None':\n",
" anim_args.max_frames = 1\n",
"elif anim_args.animation_mode == 'Video Input':\n",
" args.use_init = True\n",
"\n",
"# clean up unused memory\n",
"gc.collect()\n",
"torch.cuda.empty_cache()\n",
"\n",
"# dispatch to appropriate renderer\n",
"if anim_args.animation_mode == '2D' or anim_args.animation_mode == '3D':\n",
" render_animation(args, anim_args, animation_prompts, root)\n",
"elif anim_args.animation_mode == 'Video Input':\n",
" render_input_video(args, anim_args, animation_prompts, root)\n",
"elif anim_args.animation_mode == 'Interpolation':\n",
" render_interpolation(args, anim_args, animation_prompts, root)\n",
"else:\n",
" render_image_batch(args, prompts, root)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gJ88kZ2-WM_v"
},
"source": [
"# Create Video From Frames"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "XQGeqaGAWM_v"
},
"outputs": [],
"source": [
"skip_video_for_run_all = True #@param {type: 'boolean'}\n",
"fps = 12 #@param {type:\"number\"}\n",
"#@markdown **Manual Settings**\n",
"use_manual_settings = False #@param {type:\"boolean\"}\n",
"image_path = \"/content/drive/MyDrive/AI/StableDiffusion/2023-01/StableFun/20230101212135_%05d.png\" #@param {type:\"string\"}\n",
"mp4_path = \"/content/drive/MyDrive/AI/StableDiffusion/2023-01/StableFun/20230101212135.mp4\" #@param {type:\"string\"}\n",
"render_steps = False #@param {type: 'boolean'}\n",
"path_name_modifier = \"x0_pred\" #@param [\"x0_pred\",\"x\"]\n",
"make_gif = False\n",
"bitdepth_extension = \"exr\" if args.bit_depth_output == 32 else \"png\"\n",
"\n",
"if skip_video_for_run_all == True:\n",
" print('Skipping video creation, uncheck skip_video_for_run_all if you want to run it')\n",
"else:\n",
" import os\n",
" import subprocess\n",
" from base64 import b64encode\n",
"\n",
" print(f\"{image_path} -> {mp4_path}\")\n",
"\n",
" if use_manual_settings:\n",
" max_frames = \"200\" #@param {type:\"string\"}\n",
" else:\n",
" if render_steps: # render steps from a single image\n",
" fname = f\"{path_name_modifier}_%05d.png\"\n",
" all_step_dirs = [os.path.join(args.outdir, d) for d in os.listdir(args.outdir) if os.path.isdir(os.path.join(args.outdir,d))]\n",
" newest_dir = max(all_step_dirs, key=os.path.getmtime)\n",
" image_path = os.path.join(newest_dir, fname)\n",
" print(f\"Reading images from {image_path}\")\n",
" mp4_path = os.path.join(newest_dir, f\"{args.timestring}_{path_name_modifier}.mp4\")\n",
" max_frames = str(args.steps)\n",
" else: # render images for a video\n",
" image_path = os.path.join(args.outdir, f\"{args.timestring}_%05d.{bitdepth_extension}\")\n",
" mp4_path = os.path.join(args.outdir, f\"{args.timestring}.mp4\")\n",
" max_frames = str(anim_args.max_frames)\n",
"\n",
" # make video\n",
" cmd = [\n",
" 'ffmpeg',\n",
" '-y',\n",
" '-vcodec', bitdepth_extension,\n",
" '-r', str(fps),\n",
" '-start_number', str(0),\n",
" '-i', image_path,\n",
" '-frames:v', max_frames,\n",
" '-c:v', 'libx264',\n",
" '-vf',\n",
" f'fps={fps}',\n",
" '-pix_fmt', 'yuv420p',\n",
" '-crf', '17',\n",
" '-preset', 'veryfast',\n",
" '-pattern_type', 'sequence',\n",
" mp4_path\n",
" ]\n",
" process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n",
" stdout, stderr = process.communicate()\n",
" if process.returncode != 0:\n",
" print(stderr)\n",
" raise RuntimeError(stderr)\n",
"\n",
" mp4 = open(mp4_path,'rb').read()\n",
" data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
" display.display(display.HTML(f'<video controls loop><source src=\"{data_url}\" type=\"video/mp4\"></video>') )\n",
" \n",
" if make_gif:\n",
" gif_path = os.path.splitext(mp4_path)[0]+'.gif'\n",
" cmd_gif = [\n",
" 'ffmpeg',\n",
" '-y',\n",
" '-i', mp4_path,\n",
" '-r', str(fps),\n",
" gif_path\n",
" ]\n",
" process_gif = subprocess.Popen(cmd_gif, stdout=subprocess.PIPE, stderr=subprocess.PIPE)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "MMpAcyrYWM_v"
},
"outputs": [],
"source": [
"skip_disconnect_for_run_all = True #@param {type: 'boolean'}\n",
"\n",
"if skip_disconnect_for_run_all == True:\n",
" print('Skipping disconnect, uncheck skip_disconnect_for_run_all if you want to run it')\n",
"else:\n",
" from google.colab import runtime\n",
" runtime.unassign()"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3.10.6 ('dsd')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "b7e04c8a9537645cbc77fa0cbde8069bc94e341b0d5ced104651213865b24e58"
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}
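Note on the schedule strings: the motion fields in DeforumAnimArgs (zoom, translation_x, noise_schedule, and so on) are keyframe schedules of the form "frame:(value)". The sketch below is a minimal, numeric-only illustration of how such strings can be parsed and linearly interpolated into one value per frame; parse_key_frames and get_series are hypothetical names for this illustration, not the notebook's actual helpers, and the real Deforum code additionally evaluates math expressions of t such as "0:(10*sin(2*3.14*t/10))".

```python
import re


def parse_key_frames(schedule: str) -> dict:
    # Extract "frame:(value)" pairs, e.g. "0:(1.04), 50:(1.0)"
    # -> {0: "1.04", 50: "1.0"}. Expressions are kept as strings.
    frames = {}
    for match in re.finditer(r"(\d+):\s*\(([^)]*)\)", schedule):
        frames[int(match.group(1))] = match.group(2)
    return frames


def get_series(schedule: str, max_frames: int) -> list:
    # Numeric-only sketch: hold the first/last keyframe value outside
    # the keyframed range, interpolate linearly inside it.
    key_frames = {f: float(v) for f, v in parse_key_frames(schedule).items()}
    keys = sorted(key_frames)
    series = []
    for t in range(max_frames):
        if t <= keys[0]:
            series.append(key_frames[keys[0]])
        elif t >= keys[-1]:
            series.append(key_frames[keys[-1]])
        else:
            lo = max(f for f in keys if f <= t)
            hi = min(f for f in keys if f >= t)
            if lo == hi:  # t lands exactly on a keyframe
                series.append(key_frames[lo])
            else:
                w = (t - lo) / (hi - lo)
                series.append(key_frames[lo] * (1 - w) + key_frames[hi] * w)
    return series
```

For example, get_series("0:(0), 10:(10)", 11) ramps from 0.0 to 10.0 over eleven frames, which is the intended reading of a two-keyframe schedule like the zoom or translation parameters above.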
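Note on animation_prompts: the dictionary maps keyframes to prompts, and each frame uses the prompt of the most recent keyframe at or before it, so with the example above frames 0-19 render the apple prompt, 20-29 the banana, and so on (when interpolate_key_frames is off). A small sketch of that hold-until-next-keyframe lookup; prompt_for_frame is an illustrative name, not a function from the notebook.

```python
import bisect


def prompt_for_frame(animation_prompts: dict, frame: int) -> str:
    # Pick the entry with the largest keyframe <= frame.
    keys = sorted(animation_prompts)
    idx = bisect.bisect_right(keys, frame) - 1
    return animation_prompts[keys[max(idx, 0)]]
```

With the document's animation_prompts, prompt_for_frame(animation_prompts, 25) returns the frame-20 banana prompt.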
