
Experience Open-Sora Text-to-Video with Zero Environment Setup

This article is shared from the Huawei Cloud community post《Open-Sora 文生视频原来在AI Gallery上也能体验了》by 码上开花_Lancer. Trial link: the Open-Sora text-to-video case on AI Gallery.

Not long ago, OpenAI's Sora went viral on the strength of its astonishing video-generation quality, standing out from the crowd of text-to-video models and drawing worldwide attention. Soon afterwards, the Colossal-AI team released an open-source alternative, Open-Sora 1.0, covering the entire training pipeline, including data processing, all training details, and model checkpoints, inviting AI enthusiasts everywhere to push forward a new era of video creation. For details see: https://hpc-ai.com/blog/open-sora-v1.0

In April 2024 the team followed up with Open-Sora 1.1, which can generate videos of 2 to 15 seconds at resolutions from 144p to 720p, and supports text-to-image, text-to-video, and image-to-video generation. Let's see what Open-Sora 1.1 can actually produce.

Case study

Note: this case requires a Pytorch-2.0.1 GPU-V100 flavor or better.

Click "Run in ModelArts" to enter ModelArts CodeLab. You will need to sign in with a Huawei Cloud account; if you don't have one, register and complete real-name verification (the guide《如何创建华为云账号并且实名认证》walks through both). After signing in, wait a moment and the CodeLab runtime will open.

If you hit an Out Of Memory error, check whether your parameter settings are too high; lower them and restart the kernel, or switch to a larger resource flavor.

1. Download the code and models

This step takes about one minute; please be patient.

```python
import os
import moxing as mox

if not os.path.exists('Open-Sora'):
    mox.file.copy_parallel('obs://modelbox-course/open-sora_1.1/Open-Sora', 'Open-Sora')
if not os.path.exists('/home/ma-user/.cache/huggingface'):
    mox.file.copy_parallel('obs://modelbox-course/huggingface', '/home/ma-user/.cache/huggingface')
if not os.path.exists('Open-Sora/opensora/models/sd-vae-ft-ema'):
    mox.file.copy_parallel('obs://modelbox-course/sd-vae-ft-ema', 'Open-Sora/opensora/models/sd-vae-ft-ema')
if not os.path.exists('Open-Sora/opensora/models/text_encoder/t5-v1_1-xxl'):
    mox.file.copy_parallel('obs://modelbox-course/t5-v1_1-xxl', 'Open-Sora/opensora/models/text_encoder/t5-v1_1-xxl')
if not os.path.exists('/home/ma-user/work/t5.py'):
    mox.file.copy_parallel('obs://modelbox-course/open-sora_1.1/t5.py', '/home/ma-user/work/t5.py')
if not os.path.exists('Open-Sora/opus-mt-zh-en'):
    mox.file.copy_parallel('obs://modelarts-labs-bj4-v2/course/ModelBox/opus-mt-zh-en', 'Open-Sora/opus-mt-zh-en')
if not os.path.exists('/home/ma-user/work/frpc_linux_amd64'):
    mox.file.copy_parallel('obs://modelarts-labs-bj4-v2/course/ModelBox/frpc_linux_amd64', '/home/ma-user/work/frpc_linux_amd64')
```

```
INFO:root:Using MoXing-v2.1.6.879ab2f4-879ab2f4
INFO:root:Copy parallel total time cost: 41.71 seconds.
......
```

2. Configure the runtime environment

This case requires Python 3.10.10 or later, so we first create a virtual environment:

```
!/home/ma-user/anaconda3/bin/conda clean -i
!/home/ma-user/anaconda3/bin/conda create -n python-3.10.10 python=3.10.10 -y --override-channels --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
!/home/ma-user/anaconda3/envs/python-3.10.10/bin/pip install ipykernel
```
```
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /home/ma-user/anaconda3/envs/python-3.10.10
  added / updated specs:
    - python=3.10.10

  (22 packages downloaded from the tuna mirror, 52.8 MB in total, including
   python-3.10.10, pip-24.0, setuptools-69.5.1, openssl-1.1.1w, sqlite-3.45.3, tk-8.6.14)

Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate python-3.10.10
#
# To deactivate an active environment, use
#
#     $ conda deactivate

Looking in indexes: http://repo.myhuaweicloud.com/repository/pypi/simple
Collecting ipykernel
  Downloading ipykernel-6.29.4-py3-none-any.whl (117 kB)
......
```
```
Successfully installed asttokens-2.4.1 comm-0.2.2 debugpy-1.8.1 decorator-5.1.1 exceptiongroup-1.2.1 executing-2.0.1 ipykernel-6.29.4 ipython-8.25.0 jedi-0.19.1 jupyter-client-8.6.2 jupyter-core-5.7.2 matplotlib-inline-0.1.7 nest-asyncio-1.6.0 packaging-24.0 parso-0.8.4 pexpect-4.9.0 platformdirs-4.2.2 prompt-toolkit-3.0.46 psutil-5.9.8 ptyprocess-0.7.0 pure-eval-0.2.2 pygments-2.18.0 python-dateutil-2.9.0.post0 pyzmq-26.0.3 six-1.16.0 stack-data-0.6.3 tornado-6.4 traitlets-5.14.3 typing-extensions-4.12.1 wcwidth-0.2.13
```

Register the new environment as a Jupyter kernel:

```python
import json
import os

data = {
    "display_name": "python-3.10.10",
    "env": {
        "PATH": "/home/ma-user/anaconda3/envs/python-3.10.10/bin:/home/ma-user/anaconda3/envs/python-3.7.10/bin:/modelarts/authoring/notebook-conda/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/ma-user/modelarts/ma-cli/bin:/home/ma-user/modelarts/ma-cli/bin:/home/ma-user/anaconda3/envs/PyTorch-1.8/bin"
    },
    "language": "python",
    "argv": [
        "/home/ma-user/anaconda3/envs/python-3.10.10/bin/python",
        "-m",
        "ipykernel",
        "-f",
        "{connection_file}"
    ]
}

if not os.path.exists("/home/ma-user/anaconda3/share/jupyter/kernels/python-3.10.10/"):
    os.mkdir("/home/ma-user/anaconda3/share/jupyter/kernels/python-3.10.10/")

with open('/home/ma-user/anaconda3/share/jupyter/kernels/python-3.10.10/kernel.json', 'w') as f:
    json.dump(data, f, indent=4)
```

```
conda env list
# conda environments:
#
base                  *  /home/ma-user/anaconda3
python-3.10.10           /home/ma-user/anaconda3/envs/python-3.10.10
python-3.7.10            /home/ma-user/anaconda3/envs/python-3.7.10

Note: you may need to restart the kernel to use updated packages.
```

Once the environment is created, wait a moment or refresh the page, then click the kernel selector in the upper-right corner and choose python-3.10.10.

Check the Python version:

```
!python -V
Python 3.10.10
```

Check the available GPU; at least 32 GB of GPU memory is required:

```
!nvidia-smi
```
```
Wed Jun  5 16:22:37 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------|
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  On   | 00000000:00:0D.0 Off |                    0 |
| N/A   28C    P0    25W / 250W |      0MiB / 32510MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
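The same check can be done programmatically once PyTorch has been installed in the next step; here is a small sketch (the 32 GB threshold is the requirement stated above):

```python
import torch  # available after the dependency installation below

assert torch.cuda.is_available(), "no CUDA device visible"
# total memory of GPU 0, converted from bytes to GiB
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU 0: {torch.cuda.get_device_name(0)}, {total_gb:.1f} GB")
assert total_gb >= 32, "Open-Sora 1.1 inference needs at least 32 GB of GPU memory"
```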
Install the dependencies:

```
!pip install --upgrade pip
!pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 xformers==0.0.22
```

```
Looking in indexes: http://repo.myhuaweicloud.com/repository/pypi/simple
Requirement already satisfied: pip in /home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages (24.0)
Collecting torch==2.0.1
  Downloading torch-2.0.1-cp310-cp310-manylinux1_x86_64.whl (619.9 MB)
......
Successfully installed MarkupSafe-2.1.5 certifi-2024.6.2 charset-normalizer-3.3.2 cmake-3.29.3 filelock-3.14.0 idna-3.7 jinja2-3.1.4 lit-18.1.6 mpmath-1.3.0 networkx-3.3 numpy-1.26.4 nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 pillow-10.3.0 requests-2.32.3 sympy-1.12.1 torch-2.0.1 torchaudio-2.0.2 torchvision-0.15.2 triton-2.0.0 urllib3-2.2.1 xformers-0.0.22
```

Change into the repository:

```
%cd Open-Sora
/home/ma-user/work/ma_share/open-spra_1/Open-Sora
```

Install Open-Sora's remaining dependencies and the opensora package itself:

```
!pip install colossalai==0.3.6 accelerate==0.29.2 diffusers==0.27.2 ftfy==6.2.0 gdown==5.1.0 mmengine==0.10.3 pre-commit==3.7.0 pyav==12.0.5 tensorboard==2.16.2 timm==0.9.16 transformers==4.39.3 wandb==0.16.6
!pip install .
```

```
Looking in indexes: http://repo.myhuaweicloud.com/repository/pypi/simple
Collecting colossalai==0.3.6
......
Successfully built opensora pandarallel
Installing collected packages: pytz, tzdata, pyarrow, dill, beartype, av, pandas, pandarallel, rotary_embedding_torch, opensora
Successfully installed av-12.1.0 beartype-0.18.5 dill-0.3.8 opensora-1.1.0 pandarallel-1.6.5 pandas-2.2.2 pyarrow-16.1.0 pytz-2024.1 rotary_embedding_torch-0.6.2 tzdata-2024.1
```

Install the Gradio stack, and put the frp client in place for the public share link:

```
!pip install spaces gradio MoviePy -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
!cp /home/ma-user/work/frpc_linux_amd64 /home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2
!chmod +x /home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2
```
```
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting spaces
Collecting gradio
......
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fabric 3.2.2 requires decorator>=5, but you have decorator 4.4.2 which is incompatible.
Successfully installed MoviePy-1.0.3 aiofiles-23.2.1 altair-5.3.0 anyio-4.4.0 decorator-4.4.2 dnspython-2.6.1 email_validator-2.1.1 fastapi-0.111.0 fastapi-cli-0.0.4 ffmpy-0.3.2 gradio-4.32.2 gradio-client-0.17.0 h11-0.14.0 httpcore-1.0.5 httptools-0.6.1 httpx-0.27.0 imageio-2.34.1 imageio_ffmpeg-0.5.1 importlib-resources-6.4.0 orjson-3.10.3 proglog-0.1.10 pydub-0.25.1 python-dotenv-1.0.1 python-multipart-0.0.9 ruff-0.4.7 semantic-version-2.10.0 shellingham-1.5.4 sniffio-1.3.1 spaces-0.28.3 starlette-0.37.2 tomlkit-0.12.0 toolz-0.12.1 typer-0.12.3 ujson-5.10.0 uvicorn-0.30.1 uvloop-0.19.0 watchfiles-0.22.0 websockets-11.0.3
```

3. Generate videos

Edit the model configuration file:

```python
%%writefile configs/opensora-v1-1/inference/sample.py
num_frames = 16
frame_interval = 3
fps = 24
image_size = (240, 426)
multi_resolution = "STDiT2"

# Define model
model = dict(
    type="STDiT2-XL/2",
    from_pretrained="hpcai-tech/OpenSora-STDiT-v2-stage3",
    input_sq_size=512,  # use the model weights already downloaded from Hugging Face
    qk_norm=True,
    enable_flash_attn=True,
    enable_layernorm_kernel=True,
)
vae = dict(
    type="VideoAutoencoderKL",
    from_pretrained="./opensora/models/sd-vae-ft-ema",
    cache_dir=None,  # load from the current directory instead
    micro_batch_size=4,
)
text_encoder = dict(
    type="t5",
    from_pretrained="./opensora/models/text_encoder/t5-v1_1-xxl",
    cache_dir=None,  # load from the current directory instead
    model_max_length=200,
)
scheduler = dict(
    type="iddpm",
    num_sampling_steps=100,
    cfg_scale=7.0,
    cfg_channel=3,  # or None
)
dtype = "fp16"

# Condition
prompt_path = "./assets/texts/t2v_samples.txt"
prompt = None  # prompt has higher priority than prompt_path

# Others
batch_size = 1
seed = 42
save_dir = "./samples/samples/"
```

```
Overwriting configs/opensora-v1-1/inference/sample.py
```

Point Hugging Face downloads at a mirror and patch the bundled T5 wrapper:

```python
import os

os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
```

```
!cp /home/ma-user/work/t5.py /home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/opensora/models/text_encoder/t5.py
```

Run text-to-video inference:

```
# text to video
!python scripts/inference.py configs/opensora-v1-1/inference/sample.py --prompt "A fashion girl walking on the streets of Tokyo" --num-frames 32 --image-size 240 426
```

```
/home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/colossalai/shardformer/layer/normalization.py:45: UserWarning: Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel
Config (path: configs/opensora-v1-1/inference/sample.py): {'num_frames': 32, 'frame_interval': 3, 'fps': 24, 'image_size': [240, 426], 'multi_resolution': 'STDiT2', 'model': {'type': 'STDiT2-XL/2', 'from_pretrained': 'hpcai-tech/OpenSora-STDiT-v2-stage3', ...}, 'prompt': ['A fashion girl walking on the streets of Tokyo'], 'batch_size': 1, 'seed': 42, 'save_dir': './samples/samples/', ...}
```
```
Loading checkpoint shards: 100%|██████████████████| 2/2 [00:35<00:00, 17.87s/it]
100%|█████████████████████████████████████████| 100/100 [02:11<00:00,  1.32s/it]
Prompt: A fashion girl walking on the streets of Tokyo
Saved to ./samples/samples/sample_0.mp4
```

The generated videos are saved in the Open-Sora/samples folder. Pick one at random and preview it:

```python
import os
import random
from moviepy.editor import *
from IPython.display import Image

# directory containing the generated videos
video_root = 'samples/samples'
# list all files
videos = os.listdir(video_root)
# pick one video at random
video = random.sample(videos, 1)[0]
# input path of the chosen video
video_path = os.path.join(video_root, video)
# load the original video
clip = VideoFileClip(video_path)
# save it as a GIF
clip.write_gif("output_animation.gif", fps=10)
# display the result
Image(open('output_animation.gif','rb').read())
```

```
MoviePy - Building file output_animation.gif with imageio.
```

4. Gradio UI

Edit the configuration file:

```python
%%writefile configs/opensora-v1-1/inference/sample-ref.py
num_frames = 16
frame_interval = 3
fps = 24
image_size = (240, 426)
multi_resolution = "STDiT2"

# Condition
prompt_path = None
prompt = [
    "A car driving on the ocean.",
    "In an ornate, historical hall, a massive tidal wave peaks and begins to crash. Two surfers, seizing the moment, skillfully navigate the face of the wave.",
]

loop = 2
condition_frame_length = 4
# (
#   loop id, [the loop index of the condition image or video]
#   reference id, [the index of the condition image or video in the reference_path]
#   reference start, [the start frame of the condition image or video]
#   target start, [the location to insert]
#   length, [the number of frames to insert]
#   edit_ratio [the edit rate of the condition image or video]
# )
# See https://github.com/hpcaitech/Open-Sora/blob/main/docs/config.md#advanced-inference-config for more details
# See https://github.com/hpcaitech/Open-Sora/blob/main/docs/commands.md#inference-with-open-sora-11 for more examples
mask_strategy = [
    "0,0,0,0,8,0.3",
    None,
    "0",
]
reference_path = [
    "https://cdn.openai.com/tmp/s/interp/d0.mp4",
    None,
    "assets/images/condition/wave.png",
]

# Define model
model = dict(
    type="STDiT2-XL/2",
    from_pretrained="hpcai-tech/OpenSora-STDiT-v2-stage3",
    input_sq_size=512,  # use the model weights already downloaded from Hugging Face
    qk_norm=True,
    enable_flash_attn=True,
    enable_layernorm_kernel=True,
)
vae = dict(
    type="VideoAutoencoderKL",
    from_pretrained="./opensora/models/sd-vae-ft-ema",
    cache_dir=None,  # load from the current directory instead
    micro_batch_size=4,
)
text_encoder = dict(
    type="t5",
    from_pretrained="./opensora/models/text_encoder/t5-v1_1-xxl",
    cache_dir=None,  # load from the current directory instead
    model_max_length=200,
)
scheduler = dict(
    type="iddpm",
    num_sampling_steps=100,
    cfg_scale=7.0,
    cfg_channel=3,  # or None
)
dtype = "fp16"

# Others
batch_size = 1
seed = 42
save_dir = "./samples/samples/"
```

```
Overwriting configs/opensora-v1-1/inference/sample-ref.py
```

Edit the Gradio application:

```python
%%writefile gradio/app-ref.py
import argparse
import importlib
import os
import subprocess
import sys
import re
import json
import math

import spaces
import torch

import gradio as gr
from tempfile import NamedTemporaryFile
import datetime
from transformers import pipeline

zh2en = pipeline("translation", model="./opus-mt-zh-en")

MODEL_TYPES = ["v1.1-stage2", "v1.1-stage3"]
CONFIG_MAP = {
    "v1.1-stage2": "configs/opensora-v1-1/inference/sample-ref.py",
    "v1.1-stage3": "configs/opensora-v1-1/inference/sample-ref.py",
}
HF_STDIT_MAP = {
    "v1.1-stage2": "hpcai-tech/OpenSora-STDiT-v2-stage2",
    "v1.1-stage3": "hpcai-tech/OpenSora-STDiT-v2-stage3",
}
RESOLUTION_MAP = {
    "144p": {
        "16:9": (256, 144),
        "9:16": (144, 256),
        "4:3": (221, 165),
        "3:4": (165, 221),
        "1:1": (192, 192),
    },
    "240p": {
        "16:9": (426, 240),
        "9:16": (240, 426),
        "4:3": (370, 278),
        "3:4": (278, 370),
        "1:1": (320, 320),
    },
    "360p": {
        "16:9": (640, 360),
        "9:16": (360, 640),
        "4:3": (554, 416),
        "3:4": (416, 554),
        "1:1": (480, 480),
    },
    "480p": {
        "16:9": (854, 480),
        "9:16": (480, 854),
        "4:3": (740, 555),
        "3:4": (555, 740),
        "1:1": (640, 640),
    },
    "720p": {
        "16:9": (1280, 720),
        "9:16": (720, 1280),
        "4:3": (1108, 832),
        "3:4": (832, 1110),
        "1:1": (960, 960),
    },
}


# ============================
# Utils
# ============================
def collect_references_batch(reference_paths, vae, image_size):
    from opensora.datasets.utils import read_from_path

    refs_x = []
    for reference_path in reference_paths:
        if reference_path is None:
            refs_x.append([])
            continue
        ref_path = reference_path.split(";")
        ref = []
        for r_path in ref_path:
            r = read_from_path(r_path, image_size, transform_name="resize_crop")
            r_x = vae.encode(r.unsqueeze(0).to(vae.device, vae.dtype))
            r_x = r_x.squeeze(0)
            ref.append(r_x)
        refs_x.append(ref)
    # refs_x: [batch, ref_num, C, T, H, W]
    return refs_x


def process_mask_strategy(mask_strategy):
    mask_batch = []
    mask_strategy = mask_strategy.split(";")
    for mask in mask_strategy:
        mask_group = mask.split(",")
        assert len(mask_group) >= 1 and len(mask_group) <= 6, f"Invalid mask strategy: {mask}"
        if len(mask_group) == 1:
            mask_group.extend(["0", "0", "0", "1", "0"])
        elif len(mask_group) == 2:
            mask_group.extend(["0", "0", "1", "0"])
        elif len(mask_group) == 3:
            mask_group.extend(["0", "1", "0"])
        elif len(mask_group) == 4:
            mask_group.extend(["1", "0"])
        elif len(mask_group) == 5:
            mask_group.append("0")
        mask_batch.append(mask_group)
    return mask_batch


def apply_mask_strategy(z, refs_x, mask_strategys, loop_i):
    masks = []
    for i, mask_strategy in enumerate(mask_strategys):
        mask = torch.ones(z.shape[2], dtype=torch.float, device=z.device)
        if mask_strategy is None:
            masks.append(mask)
            continue
        mask_strategy = process_mask_strategy(mask_strategy)
        for mst in mask_strategy:
            loop_id, m_id, m_ref_start, m_target_start, m_length, edit_ratio = mst
            loop_id = int(loop_id)
            if loop_id != loop_i:
                continue
            m_id = int(m_id)
            m_ref_start = int(m_ref_start)
            m_length = int(m_length)
            m_target_start = int(m_target_start)
            edit_ratio = float(edit_ratio)
            ref = refs_x[i][m_id]  # [C, T, H, W]
            if m_ref_start < 0:
                m_ref_start = ref.shape[1] + m_ref_start
            if m_target_start < 0:
                # z: [B, C, T, H, W]
                m_target_start = z.shape[2] + m_target_start
            z[i, :, m_target_start : m_target_start + m_length] = ref[:, m_ref_start : m_ref_start + m_length]
            mask[m_target_start : m_target_start + m_length] = edit_ratio
        masks.append(mask)
    masks = torch.stack(masks)
    return masks


def process_prompts(prompts, num_loop):
    from opensora.models.text_encoder.t5 import text_preprocessing

    ret_prompts = []
    for prompt in prompts:
        if prompt.startswith("|0|"):
            prompt_list = prompt.split("|")[1:]
            text_list = []
            for i in range(0, len(prompt_list), 2):
                start_loop = int(prompt_list[i])
                text = prompt_list[i + 1]
                text = text_preprocessing(text)
                end_loop = int(prompt_list[i + 2]) if i + 2 < len(prompt_list) else num_loop
                text_list.extend([text] * (end_loop - start_loop))
            assert len(text_list) == num_loop, f"Prompt loop mismatch: {len(text_list)} != {num_loop}"
            ret_prompts.append(text_list)
        else:
            prompt = text_preprocessing(prompt)
            ret_prompts.append([prompt] * num_loop)
    return ret_prompts


def extract_json_from_prompts(prompts):
    additional_infos = []
    ret_prompts = []
    for prompt in prompts:
        parts = re.split(r"(?=[{\[])", prompt)
        assert len(parts) <= 2, f"Invalid prompt: {prompt}"
        ret_prompts.append(parts[0])
        if len(parts) == 1:
            additional_infos.append({})
        else:
            additional_infos.append(json.loads(parts[1]))
    return ret_prompts, additional_infos


# ============================
# Model-related
# ============================
def read_config(config_path):
    """
    Read the configuration file.
    """
    from mmengine.config import Config

    return Config.fromfile(config_path)


def build_models(model_type, config, enable_optimization=False):
    """
    Build the models for the given model type and configuration.
    """
    # build vae
    from opensora.registry import MODELS, build_module

    vae = build_module(config.vae, MODELS).cuda()

    # build text encoder
    text_encoder = build_module(config.text_encoder, MODELS)  # T5 must be fp32
    text_encoder.t5.model = text_encoder.t5.model.cuda()

    # build stdit
    # we load model from HuggingFace directly so that we don't need to
    # handle model download logic in HuggingFace Space
    from opensora.models.stdit.stdit2 import STDiT2

    stdit = STDiT2.from_pretrained(
        HF_STDIT_MAP[model_type],
        enable_flash_attn=enable_optimization,
        trust_remote_code=True,
    ).cuda()

    # build scheduler
    from opensora.registry import SCHEDULERS

    scheduler = build_module(config.scheduler, SCHEDULERS)

    # hack for classifier-free guidance
    text_encoder.y_embedder = stdit.y_embedder

    # move models to device
    vae = vae.to(torch.float16).eval()
    text_encoder.t5.model = text_encoder.t5.model.eval()  # t5 must be in fp32
    stdit = stdit.to(torch.float16).eval()

    # clear cuda
    torch.cuda.empty_cache()
    return vae, text_encoder, stdit, scheduler


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model-type",
        default="v1.1-stage3",
        choices=MODEL_TYPES,
        help=f"The type of model to run for the Gradio App, can only be {MODEL_TYPES}",
    )
    parser.add_argument("--output", default="./outputs", type=str, help="The path to the output folder")
    parser.add_argument("--port", default=None, type=int, help="The port to run the Gradio App on.")
    parser.add_argument("--host", default=None, type=str, help="The host to run the Gradio App on.")
    parser.add_argument("--share", action="store_true", help="Whether to share this gradio demo.")
    parser.add_argument(
        "--enable-optimization",
        action="store_true",
        help="Whether to enable optimization such as flash attention and fused layernorm",
    )
    return parser.parse_args()


# ============================
# Main Gradio Script
# ============================
# as `run_inference` needs to be wrapped by `spaces.GPU` and the input can only be the prompt text
# so we can't pass the models to `run_inference` as arguments.
# instead, we need to define them globally so that we can access these models inside `run_inference`

# read config
args = parse_args()
config = read_config(CONFIG_MAP[args.model_type])

# make outputs dir
os.makedirs(args.output, exist_ok=True)

# disable torch jit as it can cause failure in gradio SDK
# gradio sdk uses torch with cuda 11.3
torch.jit._state.disable()

# import after installation
from opensora.datasets import IMG_FPS, save_sample
from opensora.utils.misc import to_torch_dtype

# some global variables
dtype = to_torch_dtype(config.dtype)
device = torch.device("cuda")

# build model
vae, text_encoder, stdit, scheduler = build_models(args.model_type, config, enable_optimization=args.enable_optimization)


def run_inference(mode, prompt_text, resolution, aspect_ratio, length, reference_image, seed, sampling_steps, cfg_scale):
    torch.manual_seed(seed)
    with torch.inference_mode():
        # ======================
        # 1. Preparation
        # ======================
        # parse the inputs
        resolution = RESOLUTION_MAP[resolution][aspect_ratio]

        # gather args from config
        num_frames = config.num_frames
        frame_interval = config.frame_interval
        fps = config.fps
        condition_frame_length = config.condition_frame_length

        # compute number of loops
        if mode == "Text2Image":
            num_frames = 1
            num_loop = 1
        else:
            num_seconds = int(length.rstrip('s'))
            if num_seconds <= 16:
                num_frames = num_seconds * fps // frame_interval
                num_loop = 1
            else:
                config.num_frames = 16
                total_number_of_frames = num_seconds * fps / frame_interval
                num_loop = math.ceil((total_number_of_frames - condition_frame_length) / (num_frames - condition_frame_length))

        # prepare model args
        if config.num_frames == 1:
            fps = IMG_FPS

        model_args = dict()
        height_tensor = torch.tensor([resolution[0]], device=device, dtype=dtype)
        width_tensor = torch.tensor([resolution[1]], device=device, dtype=dtype)
        num_frames_tensor = torch.tensor([num_frames], device=device, dtype=dtype)
        ar_tensor = torch.tensor([resolution[0] / resolution[1]], device=device, dtype=dtype)
        fps_tensor = torch.tensor([fps], device=device, dtype=dtype)
        model_args["height"] = height_tensor
        model_args["width"] = width_tensor
        model_args["num_frames"] = num_frames_tensor
        model_args["ar"] = ar_tensor
        model_args["fps"] = fps_tensor

        # compute latent size
        input_size = (num_frames, *resolution)
        latent_size = vae.get_latent_size(input_size)

        # process prompt (translate the Chinese input to English first)
        prompt = zh2en(prompt_text)[0].get("translation_text")
        prompt_raw = [prompt]
        print(prompt_raw)
        prompt_raw, _ = extract_json_from_prompts(prompt_raw)
        prompt_loops = process_prompts(prompt_raw, num_loop)
        video_clips = []

        # prepare mask strategy
        if mode == "Text2Image":
            mask_strategy = [None]
        elif mode == "Text2Video":
            if reference_image is not None:
                mask_strategy = ['0']
            else:
                mask_strategy = [None]
        else:
            raise ValueError(f"Invalid mode: {mode}")

        # =========================
        # 2. Load reference images
        # =========================
        if mode == "Text2Image":
            refs_x = collect_references_batch([None], vae, resolution)
        elif mode == "Text2Video":
            if reference_image is not None:
                # save image to disk
                from PIL import Image
                im = Image.fromarray(reference_image)

                with NamedTemporaryFile(suffix=".jpg") as temp_file:
                    im.save(temp_file.name)
                    refs_x = collect_references_batch([temp_file.name], vae, resolution)
            else:
                refs_x = collect_references_batch([None], vae, resolution)
        else:
            raise ValueError(f"Invalid mode: {mode}")

        # 4.3. long video generation
        for loop_i in range(num_loop):
            # 4.4 sample in hidden space
            batch_prompts = [prompt[loop_i] for prompt in prompt_loops]
            z = torch.randn(len(batch_prompts), vae.out_channels, *latent_size, device=device, dtype=dtype)

            # 4.5. apply mask strategy
            masks = None

            # if cfg.reference_path is not None:
            if loop_i > 0:
                ref_x = vae.encode(video_clips[-1])
                for j, refs in enumerate(refs_x):
                    if refs is None:
                        refs_x[j] = [ref_x[j]]
                    else:
                        refs.append(ref_x[j])
                    if mask_strategy[j] is None:
                        mask_strategy[j] = ""
                    else:
                        mask_strategy[j] += ";"
                    mask_strategy[j] += f"{loop_i},{len(refs)-1},-{condition_frame_length},0,{condition_frame_length}"

            masks = apply_mask_strategy(z, refs_x, mask_strategy, loop_i)

            # 4.6. diffusion sampling
            # hack to update num_sampling_steps and cfg_scale
            scheduler_kwargs = config.scheduler.copy()
            scheduler_kwargs.pop('type')
            scheduler_kwargs['num_sampling_steps'] = sampling_steps
            scheduler_kwargs['cfg_scale'] = cfg_scale

            scheduler.__init__(**scheduler_kwargs)
            samples = scheduler.sample(
                stdit,
                text_encoder,
                z=z,
                prompts=batch_prompts,
                device=device,
                additional_args=model_args,
                mask=masks,  # scheduler must support mask
            )
            samples = vae.decode(samples.to(dtype))
            video_clips.append(samples)

            # 4.7. save video
            if loop_i == num_loop - 1:
                video_clips_list = [video_clips[0][0]] + [
                    video_clips[i][0][:, config.condition_frame_length :] for i in range(1, num_loop)
                ]
                video = torch.cat(video_clips_list, dim=1)
                current_datetime = datetime.datetime.now()
                timestamp = current_datetime.timestamp()
                save_path = os.path.join(args.output, f"output_{timestamp}")
                saved_path = save_sample(video, save_path=save_path, fps=config.fps // config.frame_interval)
                return saved_path


@spaces.GPU(duration=200)
def run_image_inference(prompt_text, resolution, aspect_ratio, length, reference_image, seed, sampling_steps, cfg_scale):
    return run_inference("Text2Image", prompt_text, resolution, aspect_ratio, length, reference_image, seed, sampling_steps, cfg_scale)


@spaces.GPU(duration=200)
def run_video_inference(prompt_text, resolution, aspect_ratio, length, reference_image, seed, sampling_steps, cfg_scale):
    return run_inference("Text2Video", prompt_text, resolution, aspect_ratio, length, reference_image, seed, sampling_steps, cfg_scale)


def main():
    # create demo
    with gr.Blocks() as demo:
        with gr.Row():
            with gr.Column():
                gr.HTML("""<h1 align="center">Open-Sora 1.1</h1>""")

        with gr.Row():
            with gr.Column():
                prompt_text = gr.Textbox(
                    label="Prompt",
                    placeholder="请输入中文提示词",
                    lines=4,
                )
                resolution = gr.Radio(
                    choices=["144p", "240p", "360p", "480p", "720p"],
                    value="240p",
                    label="Resolution",
                )
                aspect_ratio = gr.Radio(
                    choices=["9:16", "16:9", "3:4", "4:3", "1:1"],
                    value="9:16",
                    label="Aspect Ratio (H:W)",
                )
                length = gr.Radio(
                    choices=["2s", "4s", "8s", "16s"],
                    value="2s",
                    label="Video Length (only effective for video generation)",
                    info="8s may fail as Hugging Face ZeroGPU has the limitation of max 200 seconds inference time.",
                )

                with gr.Row():
                    seed = gr.Slider(value=1024, minimum=1, maximum=2048, step=1, label="Seed")
                    sampling_steps = gr.Slider(value=100, minimum=1, maximum=200, step=1, label="Sampling steps")
                    cfg_scale = gr.Slider(value=7.0, minimum=0.0, maximum=10.0, step=0.1, label="CFG Scale")

                reference_image = gr.Image(
                    label="Reference Image (Optional)",
                )
            with gr.Column():
                output_video = gr.Video(label="Output Video", height="100%")

        with gr.Row():
            image_gen_button = gr.Button("Generate image")
            video_gen_button = gr.Button("Generate video")

        image_gen_button.click(
            fn=run_image_inference,
            inputs=[prompt_text, resolution, aspect_ratio, length, reference_image, seed, sampling_steps, cfg_scale],
            outputs=reference_image,
        )
        video_gen_button.click(
            fn=run_video_inference,
            inputs=[prompt_text, resolution, aspect_ratio, length, reference_image, seed, sampling_steps, cfg_scale],
            outputs=output_video,
        )

    # launch
    demo.launch(share=True, inbrowser=True)


if __name__ == "__main__":
    main()
```

```
Writing gradio/app-ref.py
```
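One detail worth pausing on is how `run_inference` turns a requested duration into a number of generation loops: every loop after the first re-uses `condition_frame_length` frames of the previous clip as conditioning, so each loop contributes only `num_frames - condition_frame_length` new frames. A small worked example with the config values above (the 32-second duration is hypothetical, since the UI caps the choice at 16 s, which fits in a single loop):

```python
import math

fps = 24                    # from the config above
frame_interval = 3
num_frames = 16             # frames produced per loop
condition_frame_length = 4  # frames carried over as conditioning

# a hypothetical 32-second request spans 32 * 24 / 3 = 256 sampled frames
total_number_of_frames = 32 * fps / frame_interval

# each loop after the first adds num_frames - condition_frame_length new frames
num_loop = math.ceil((total_number_of_frames - condition_frame_length)
                     / (num_frames - condition_frame_length))
print(num_loop)  # ceil(252 / 12) = 21
```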
Run the Gradio app; once it starts, click the link after "Running on public URL" to try it out:

```
!python gradio/app-ref.py
```

```
/home/ma-user/anaconda3/envs/python-3.10.10/lib/python3.10/site-packages/transformers/models/marian/tokenization_marian.py:197: UserWarning: Recommended: pip install sacremoses.
Loading checkpoint shards: 100%|██████████████████| 2/2 [00:32<00:00, 16.15s/it]
......
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://64147712240bbb3753.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
```

We have also prepared some prompts for reference:

- A chubby rabbit in a purple robe walking through a fantasy landscape
- Waves crashing against a lonely lighthouse, ominous light
- A mysterious forest revealing a traveler's adventures
- A blue-haired mage singing
- A surreal landscape with floating islands and waterfalls in the sky
- A bluebird standing in the water
- A young man walking alone by the sea
- Pink roses dripping on a glass surface, close-up
- Seen from a moving car, a subway train emerging from a tunnel
- All the planets in space in green and pink, with bright white stars in the background
- A city floating in astral space, with stars and nebulae
- Sunrise on top of a high-rise building
- An explosion of pink and cyan powder
- A deer in the woods gazing at the camera in the sunlight
- A flash of lightning, and a wizard appears out of thin air, his robe billowing in the wind
- A futuristic cyberpunk cityscape at night, towering neon-lit skyscrapers
- Trees, flowers, and animals coming together in a symphony of nature
- A ghostly ship sailing through the clouds across a moonlit sky
- Sunset over a beautiful beach
- A young man walking alone in the forest

5. Showcase

Prompt: An extreme close-up of a gray-haired man with a beard in his 60s, deep in thought pondering the history of the universe as he sits at a café in Paris; his eyes focus on people off-screen as they walk by while he remains mostly motionless; he wears a wool-coat suit jacket with a button-up shirt, a brown beret and glasses, with a very professorial appearance, and at the end offers a subtle closed-mouth smile, as if he has found the answer to the mystery of life; the lighting is very cinematic, with golden light and the Parisian streets and city in the background; depth of field, cinematic 35mm film.

Prompt: Drone footage of waves crashing against the rugged cliffs of Big Sur's Garay Point beach. The blue water churns up white-capped waves as the golden light of the setting sun illuminates the rocky shore. A small island with a lighthouse sits in the distance, and green shrubbery covers the cliff's edge. The steep drop from the road down to the beach is a dramatic feat, with the cliff edges jutting out over the sea. This is a view that captures the raw beauty of the coast and the rugged landscape of the Pacific Coast Highway.

Prompt: A soaring drone shot captures the majestic beauty of a coastal cliff, its red and yellow layered rock face rich in color against the vibrant turquoise sea. Sea birds can be seen flying along the cliff face. As the drone slowly moves between angles, the changing sunlight casts shifting shadows that highlight the cliff's rugged texture and the calm sea around it. The water laps gently at the rock base and at the greenery clinging to the cliff top, and the scene conveys a feeling of serene isolation at the edge of the ocean. The video captures the essence of pristine natural beauty untouched by human structures.

Prompt: A majestic, beautiful waterfall cascades down a cliff into a tranquil lake. The waterfall, with its powerful flow, is the central focus of the video. The surrounding landscape is lush and green, with trees and foliage adding to the natural beauty. The camera angle provides a bird's-eye view of the waterfall, letting viewers appreciate its full height and grandeur. The video is a stunning representation of nature's power and beauty.

Prompt: A bustling city street at night, filled with the glow of car headlights and the ambient light of streetlamps. The scene is a blur of motion, with cars speeding by and pedestrians crossing at crosswalks. The cityscape is a mix of towering buildings and illuminated signs, creating a vibrant and dynamic atmosphere. The perspective is a high angle, providing a bird's-eye view of the street and its surroundings. The overall style is energetic and lively, capturing the essence of city life at night.

Prompt: A serene night scene in a forested area. The first frame shows a tranquil lake reflecting a star-filled sky. The second frame shows a beautiful sunset casting a warm glow over the landscape. The third frame shows the night sky, full of stars and a vibrant Milky Way. The video is shot in time-lapse, capturing the transition from day to night with the lake and forest as constant backdrops. The style is naturalistic, emphasizing the beauty of the night sky and the serenity of the forest.

The generated clips can also be scored with MusicGen, turning AI output into short-form video creations.
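A minimal sketch of that muxing step with moviepy (installed earlier); "soundtrack.wav" is a placeholder for a track produced by MusicGen:

```python
from moviepy.editor import VideoFileClip, AudioFileClip

video = VideoFileClip("samples/samples/sample_0.mp4")
# "soundtrack.wav" is a placeholder for a MusicGen-generated track;
# this assumes the track is at least as long as the video clip
audio = AudioFileClip("soundtrack.wav").subclip(0, video.duration)
video.set_audio(audio).write_videofile("sample_0_scored.mp4")
```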


How Fast Is a Static File Server Built on Spring Boot 3.2 Virtual Threads?

Spring Boot 3.2 arrived with great fanfare in November 2023, marking a pivotal moment for Java development. The release introduced a batch of headline features:

- Virtual threads: tap Project Loom's virtual threads for scalability, cutting resource consumption and improving concurrency.
- Native image support: compile blazing-fast applications as native images, reducing startup time and optimizing resource usage.
- JVM checkpointing: use the CRaC project's JVM checkpoint mechanism to restart applications quickly, without lengthy re-initialization.
- RestClient: a functional approach with the new RestClient interface, simplifying HTTP interactions and streamlining code.
- Spring for Apache Pulsar: robust messaging powered by Apache Pulsar, integrated seamlessly into Spring Boot applications.

Among these, virtual threads are one of the most transformative features of recent Java releases. As the official documentation puts it: virtual threads are lightweight threads that reduce the effort of writing, maintaining, and debugging high-throughput concurrent applications. A thread is the smallest schedulable unit of processing; it runs concurrently with, and largely independently of, other such units, and it is an instance of java.lang.Thread. There are two kinds of thread: platform threads and virtual threads. A platform thread is a thin wrapper around an operating-system (OS) thread; it runs Java code on its underlying OS thread and captures that OS thread for its entire lifetime, so the number of available platform threads is capped by the number of OS threads. A virtual thread is also an instance of java.lang.Thread, but it is not bound to a particular OS thread. It still runs code on an OS thread, yet when code in a virtual thread calls a blocking I/O operation, the Java runtime suspends the virtual thread until it can resume, freeing the associated OS thread to serve other virtual threads. Virtual threads suit tasks that spend most of their time blocked, typically waiting for I/O to complete; they are not intended for long-running CPU-intensive work.

While it is widely accepted that virtual threads excel in I/O-bound scenarios, their performance on CPU-bound tasks remains a question mark. This series digs into their potential benefits across use cases, from a bare "hello world" to practical workloads such as static file serving (I/O-bound), QR-code generation (CPU-bound), and multipart/form-data processing (mixed).

In the opening article of the series we compared virtual threads against platform threads in the simplest (and least realistic) hello-world case and found essentially no difference in performance or resource usage. This time we get more "practical" and compare the two as a static file server, a genuinely common real-world case. Let's see what we find.

If you are in the middle of moving from Spring Boot 2.x to 3.2, here is a handy reference: the Spring Boot 2.x to 3.2 upgrade guide.

Test environment

All tests ran on a MacBook Pro M2 with 16 GB RAM, 8 physical cores, and 4 efficiency cores. The load generator is Bombardier, one of the faster HTTP load testers (written in Go). Software versions: Java 21.0.1, Spring Boot 3.2.1.

Application configuration

Apart from the main Java class, no Java files need to be written: the static file server works through configuration alone.

application.properties:

```properties
server.port=3000
spring.mvc.static-path-pattern=/static/**
spring.web.resources.static-locations=file:/Users/mayankc/Work/source/perfComparisons/static/
```

When testing virtual threads, we enable them by adding:

```properties
spring.threads.virtual.enabled=true
```

pom.xml (key sections):

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.1</version>
    <relativePath/>
</parent>
<groupId>com.example</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>demo</name>
<description>Demo project for Spring Boot</description>
<properties>
    <java.version>21</java.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
```

Test data

100K files of identical size but different content were placed in the static resources directory; each file is exactly 102,400 bytes, and the files are named 1 through 100000. A modified build of Bombardier generates a random request URL for each request: http://localhost:3000/static/<file-name>

Test scenarios

To keep results consistent, every test goes through a 5K-request warm-up phase before data collection begins. Measurements are then taken at concurrency levels of 50, 100, and 300 connections, each sustaining a workload of 5 million requests.

Metrics

Beyond raw speed, a detailed metrics framework captures the latency distribution (minimum, percentiles, maximum) and throughput (requests per second). CPU and memory usage monitoring rounds out the analysis, giving a complete picture of system behavior under the different workloads.

Test results

The results were presented as charts (not reproduced here).

Summary

The static-file-serving analysis shows platform threads slightly ahead in both performance and resource efficiency, contrary to our expectation. That said, this I/O-bound scenario may not be the ideal stage for virtual threads to reach their full potential: tasks involving database interaction may show a more compelling advantage, and perhaps the load simply wasn't heavy enough for virtual threads to shine. To find out, upcoming articles will cover a URL shortener (database-driven), QR-code generation (CPU-bound), and mixed workloads such as form-data processing, aiming to reveal the cases where virtual threads truly excel.

Follow the author's WeChat account 程序猿DD for more.
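For readers who want to reproduce the setup, a plausible Bombardier invocation looks like the sketch below. The stock tool hits one fixed URL; the author used a modified build to randomize file names, so the fixed path here is an illustrative stand-in:

```
# 50 concurrent connections (-c), 5M total requests (-n),
# --latencies prints the latency distribution
bombardier -c 50 -n 5000000 --latencies http://localhost:3000/static/12345
```

Repeat with -c 100 and -c 300 to cover the other two concurrency levels described above.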


PaddleHub Adds 100+ Quality Models: Build an AI Magic Mirror That Talks to Your Future Self in a Few Lines of Code

Have you ever fantasized about talking to your future self, asking how life turned out and whether your old dreams came true? PaddleHub now makes this easy; take a quick look at the effect first.

The project combines the latest models in PaddleHub, spanning speech recognition, image generation, dialogue, voice cloning, and lip-sync generation, and achieves a conversation with your future self in just a dozen or so lines of code. So how is such an impressive project put together? Let's break down the technology.

First, we simulate your future appearance, that is, generate an aged version of a face. We chose styleganv2_editing, a high-quality face-generation model contributed by PaddleGAN that can edit a dozen or so facial attributes such as age, gender, and eye distance; it is a very capable model.

Second, for the human-machine dialogue we need to transcribe a piece of audio into text and then generate a reply from that text. From the PaddleHub model zoo we picked u2_conformer_wenetspeech, a strong Chinese speech-recognition model from PaddleSpeech, and plato-mini, Baidu's own dialogue model from PaddleNLP trained on billion-scale Chinese conversation data.

Next, we turn the reply into speech with your own timbre. This uses the latest voice-cloning model ge2e_fastspeech2_pwgan from PaddleSpeech: feed it the audio of your message to the future together with the generated reply text, and it produces a cloned audio clip. At this point we can already talk to ourselves.

Finally, we combine the audio and the image so that our aged face speaks. This uses PaddleGAN's wav2lip model: with two lines of code, loading the model and running prediction, it composes our future appearance and the dialogue audio into a lip-synced video.

Full project: https://aistudio.baidu.com/aistudio/projectdetail/3216625

PaddleHub is PaddlePaddle's official tool for applying pretrained models. It is easy to pick up, the code is concise, and the model zoo is rich; if you know Python you can use it, prediction takes ten lines of code, and the scenario models are practical. Whether you are just getting started or want to build all kinds of creative applications, PaddleHub fits well.

Recently, PaddleHub added 100+ new models supporting voice cloning, sound classification and detection, text-to-speech (TTS), speech recognition (ASR), machine translation, simultaneous interpretation, multi-object video tracking, lip-sync generation, and more. Here is how PaddleHub's catalog breaks down:

- Vision: 212 models, including image classification, face detection, mask detection, vehicle detection, keypoint detection, portrait segmentation, OCR, and image super-resolution/colorization/cartoonization.
- Text: 130 models, including Chinese word segmentation, POS tagging, syntactic parsing, AI poetry/couplets/love letters/acrostics, sentiment analysis, text moderation, machine translation, and simultaneous interpretation.

PaddleHub's models now cover five domains and dozens of tasks, with 360+ hand-picked, effective, practical models, every one of which can run prediction in ten lines of code or fewer.

- 360+ rich, practical models: a new model-zoo index page gives a clear, categorized overview of all 360+ models so you can quickly find the one you need.
- One-click prediction in ten lines or fewer: the prediction code and API information in the docs are clean and copy-paste ready.
- One-line service deployment: simple deployment commands with detailed tutorials make deployment easy.

Tempted yet? PaddleHub has accumulated 7.2K+ stars, regularly appears on GitHub Trending daily and monthly lists, is used by 570+ projects, and receives frequent community contributions; developers clearly love it. To recap: quick to learn, little code, many models, highly practical. Many external developers are already helping build the project, and everyone is welcome to participate and contribute. If you find it useful, give the project a star:
https://github.com/PaddlePaddle/PaddleHub

The repo also lists an official WeChat group: add the "飞桨小助手" assistant and reply "hub" for an invite link, which makes technical discussion and Q&A more efficient; if you are interested in PaddleHub and want to build more fun things, reply "sig" to join the PPSIG Addons-Hub group.

PaddleHub:
GitHub: https://github.com/PaddlePaddle/PaddleHub
Gitee: https://gitee.com/paddlepaddle/PaddleHub
Website: https://www.paddlepaddle.org.cn/hub

Originally shared by the 飞桨PaddlePaddle blog (CSDN).
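As a taste of the "prediction in ten lines" claim, here is a minimal sketch using one of the models named above; the wav path is a placeholder, and the exact entry point can vary by PaddleHub version, so check `help(asr)` if it differs:

```python
import paddlehub as hub

# load a pretrained module by name; PaddleHub downloads it on first use
asr = hub.Module(name="u2_conformer_wenetspeech")

# transcribe a local wav file ("speech.wav" is a placeholder path);
# speech_recognize is the entry point used by PaddleHub's ASR modules
text = asr.speech_recognize("speech.wav")
print(text)
```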


2. Setting Up the CAS SSO Single Sign-On Server: A Detailed Walkthrough

1. Because we are simulating an SSO environment locally, and SSO testing needs domain names, we first fake a few domains, as follows.

2. Go to the directory C:\Windows\System32\drivers\etc

3. Edit the hosts file:

```
127.0.0.1 jeesz.cn
127.0.0.1 sso1.jeesz.cn
127.0.0.1 sso2.jeesz.cn
```

4. Generate the authentication certificate. Note: as the SSO architecture diagram shows, the client and the server need certificates to authenticate their interaction. Here we use the JDK's keytool to generate one (a production environment should use a certificate issued by a proper CA, which you can purchase).

(i) Create a caskey directory, e.g. D:\sso-cas\caskey
(ii) Open a cmd window.
(iii) cd into the JDK bin directory.
(iv) Generate a key:

```
keytool -genkey -alias mycacerts -keyalg RSA -keystore D:/sso-cas/caskey/keycard
```

Note: fill in the requested details for the certificate. For "first and last name", use your domain name, matching the mapping in C:\Windows\System32\drivers\etc\hosts; be careful not to enter an IP address. Here I used the password minglisoft (remember it, it is needed later) and the name jeesz.cn (matching the hosts entry).

(v) Export the certificate:

```
keytool -export -file D:/sso-cas/caskey/keycard.crt -alias mycacerts -keystore D:/sso-cas/caskey/keycard
```

Note: the password is minglisoft.

(vi) Import the certificate into the JDK installation:

```
keytool -import -keystore C:/"Program Files"/Java/jdk1.8.0_131/jre/lib/security/cacerts -file D:/sso-cas/caskey/keycard.crt -alias mycacerts
```

Note: I have already generated this; in the end just copy cacerts into C:\Program Files\Java\jdk1.8.0_131\jre\lib\security. Password: minglisoft.

5. Prepare the server's Tomcat container; Tomcat 8 is used here for testing (first make sure Tomcat starts and runs).

6. Edit server.xml. Note: we use HTTPS authentication, so uncomment this connector and modify it as follows:

```xml
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="D:/sso-cas/caskey/keycard"
           keystorePass="minglisoft" />
```

Here keystoreFile="D:/sso-cas/caskey/keycard" is the certificate path and keystorePass="minglisoft" is the certificate password.

7. Test that HTTPS on port 8443 is reachable at https://localhost:8443 — the configuration works.

8. You can also restrict access to the domain name: again in server.xml, change the localhost Host to jeesz.cn, i.e. change

```xml
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
```

to

```xml
<Host name="jeesz.cn" appBase="webapps" unpackWARs="true" autoDeploy="true">
```

Restart the Tomcat container and visit http://jeesz.cn:8080

9. Copy cas-server-webapp-4.2.7.war into the Tomcat container and rename it cas.war.

10. Restart the Tomcat container and open CAS at https://jeesz.cn:8443/cas — the default username is casuser and the default password is Mellon.

With that, the CAS server test succeeds!
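As an optional sanity check before wiring up clients (a sketch; the paths mirror the steps above), you can confirm the certificate actually landed in the JDK trust store:

```
keytool -list -alias mycacerts -keystore "C:/Program Files/Java/jdk1.8.0_131/jre/lib/security/cacerts"
```

Enter the trust-store password when prompted (changeit is the JDK default unless it has been changed); keytool should print the mycacerts entry with its fingerprint.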


gods-pen (码良), an Open-Source Visual Page Builder, Adds Desktop Page Authoring

码良's new trick: desktop page authoring

码良 (gods-pen) recently shipped a wave of new features and, holding nothing back, continued to open-source the code — go star the repo. The new capabilities include desktop page authoring, a "floor mode", proper layout (container) component development, component packaging, Word document parsing, PSD parsing, and more. This article walks through the desktop page authoring feature with concrete examples.

Choosing a canvas

The canvas is a new concept in 码良: the region of the stage where components are placed. Its height can grow, but its width cannot stretch, and the renderer's page adaptation mainly adapts pages produced on canvases of different widths to devices of different widths, so we mostly care about canvas width. With this concept in place, 码良 ships four built-in canvases: three for mobile and one for desktop. Since we are building a desktop page, click the widest canvas indicator to switch to the desktop canvas. At that point you will be asked whether to switch to "floor mode".

So what is floor mode?

Floor mode

Consider a simple example. In the properties panel you can set a layout mode on a component. The layout mode determines how its direct children are positioned: in "floor mode", children stack downward like the floors of a building; in "free mode", children overlap and can be moved freely. In practice, floor mode suits desktop pages better, which is why switching to the desktop canvas asks whether to enable it when the root component isn't already in floor mode — the situation shown in the previous section.

Automatic height

For a component in floor mode, since its children flow downward, you can disable its height (style panel → height → disable; the value is effectively set to auto), so the component derives its height from the accumulated heights of its children.

Horizontal layout

Floor mode handles vertical flow down the page; when components need to be arranged horizontally, the multi-column layout component steps in.

Page structure

Desktop pages are usually organized as a top bar, a content area, and a footer, possibly with one or more sidebars. Combining floor mode with the multi-column layout component, we can easily build all of these structures. Taking the lower-left layout from the diagram as an example and implementing it in 码良: using the "empty container" and "multi-column layout" components, the vertical flow of floor mode, the horizontal capability of the multi-column component, and the automatic-height trick described above, we successfully build a page layout with a sidebar, top bar, main content area, and footer.

Filling in content

With the page skeleton done, add content to each part as needed. Since few components have been developed specifically for the desktop canvas so far, building desktop pages currently means flexibly composing basic components (image, text, rich text, and so on) to achieve complex effects and functionality. The 码良 team is hard at work on desktop components; as these higher-level, easier-to-use components land, desktop page building will speed up considerably. Readers are welcome to join this effort and contribute components to the 码良 component marketplace — stars help too.

Device adaptation

Desktop devices vary in width, so when building a desktop page you need to consider how it will actually render on different devices. Unlike mobile pages, which can adapt by simply scaling the viewport, 码良 applies no automatic adaptation strategy to desktop pages. So how do you get a suitable rendering for a specific page? Considering that most desktop pages use a relatively fixed content width, such as 1140px or 960px, that content needs no special handling: just use the absolute unit px (the default). Content areas are also generally centered, and because this need is so common, 码良 provides a dedicated "center" property; enable it on a component for automatic dynamic centering. For components whose rendering depends on the device width, such as a full-width navigation bar, stop using the component's default absolute units: switch the size and position units to "%" and enter the values manually. Beyond the platform-level absolute- and relative-unit schemes, components themselves can be made responsive with techniques such as media queries, presenting different forms at different device widths.

Demo

Finally — "first do this, then this, then this, and finally this" — a beautiful demo page is ready.

Looking ahead

With desktop support in place, we are also experimenting with using 码良 for data dashboards, where the platform turns out to have unique advantages: pages adapt well, and third-party chart components integrate cleanly. We will publish a write-up once the relevant components are richer. Below are some of our dashboard experiments. Demo address: click to see the effect.

Support us

Open source takes persistence, and our motivation to persist comes from your support. Since you're here, leave a star before you go:
Project: https://github.com/ymm-tech/gods-pen
Project (mirror in China): https://gitee.com/ymm-tech/gods-pen
