SD Installation Notes

./webui.sh --server-name=0.0.0.0 --listen --device-id 1

set COMMANDLINE_ARGS=--share

/etc/apt/sources.list.d/cuda-ubuntu2204-12-2-local.list
If you want to remove the NVIDIA driver, this source must be deleted first.

wget https://developer.download.nvidia.com/compute/cuda/12.2.0/local_installers/cuda_12.2.0_535.54.03_linux.run
sudo sh cuda_12.2.0_535.54.03_linux.run
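
A quick sanity check after the installer finishes (assuming the default install prefix /usr/local/cuda-12.2):

export PATH=/usr/local/cuda-12.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64:$LD_LIBRARY_PATH
nvcc --version    # should report release 12.2
nvidia-smi        # should show driver 535.54.03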

Removing via the .run file is the approach most likely to succeed.
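
If they were installed from the runfile, the bundled uninstallers can be used (paths assume the default install locations):

sudo /usr/local/cuda-12.2/bin/cuda-uninstaller    # toolkit
sudo /usr/bin/nvidia-uninstall                    # driver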

https://developer.nvidia.com/cuda-12-2-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local

CUDA download page

sudo vim /etc/modprobe.d/blacklist-nouveau.conf

blacklist nouveau
options nouveau modeset=0

sudo update-initramfs -u

lsmod | grep nouveau

Disable nouveau. After a reboot, lsmod | grep nouveau should print nothing.
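
The blacklist file can also be written in one shot (equivalent to the vim edit above):

sudo bash -c 'cat > /etc/modprobe.d/blacklist-nouveau.conf <<EOF
blacklist nouveau
options nouveau modeset=0
EOF'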

sudo ubuntu-drivers devices

List the detected GPU and the available/recommended drivers
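
To have Ubuntu install the recommended driver automatically (an alternative to the runfile):

sudo ubuntu-drivers autoinstall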

import torch
torch.__version__

'2.1.0+cu121'

torch.cuda.is_available()

Check the Torch version and whether CUDA is available
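
The same check as a shell one-liner:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"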


https://blog.csdn.net/zxdd2018/article/details/127705627

Installing cuDNN
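
Once installed, a quick check that PyTorch actually sees cuDNN (prints the cuDNN build number, e.g. 8902 for 8.9.2):

python -c "import torch; print(torch.backends.cudnn.version())"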

sudo apt-get -f install
Tries to fix broken dependencies, but most of the time it doesn't help much; removing everything and reinstalling is the cleanest fix.

sudo apt-get upgrade

cannot import name '_compare_version' from 'torchmetrics.utilities.imports'

https://zhuanlan.zhihu.com/p/619901627
A fairly complete walkthrough of installing CUDA.

export CUDA_VISIBLE_DEVICES=0,1,2,3
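
To confirm the mask takes effect, torch should only see the listed devices:

CUDA_VISIBLE_DEVICES=1 python -c "import torch; print(torch.cuda.device_count())"    # prints 1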

pip config set global.index-url https://mirrors.aliyun.com/pypi/simple

Still didn't work; switched to the command below:

pip install open-clip-torch
open-clip-torch installs fine with the Aliyun mirror
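
The mirror can also be used for a single install without changing the global config:

pip install open-clip-torch -i https://mirrors.aliyun.com/pypi/simple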

--xformers
--reinstall-torch

sudo sh cuda_12.2.0_535.54.03_linux.run --silent --driver

CUDA_VISIBLE_DEVICES=0,1,2,3 python launch.py --share

No module 'xformers'. Proceeding without it.

pip install xformers==0.0.16
pip install xformers==0.0.23.post1

Install version 0.0.16; the higher versions didn't work here.


Installing collected packages: torch, xformers
  Attempting uninstall: torch
    Found existing installation: torch 1.13.1+cu117
    Uninstalling torch-1.13.1+cu117:
      Successfully uninstalled torch-1.13.1+cu117
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.16
    Uninstalling xformers-0.0.16:
      Successfully uninstalled xformers-0.0.16

set COMMANDLINE_ARGS=--xformers

webui.sh --xformers

Launching the webui with the command above fails with ImportError: cannot import name '_compare_version' from 'torchmetrics.utilities.imports' (/root/.conda/envs/aigc/lib/python3.10/site-packages/torchmetrics/utilities/imports.py). This is clearly a version-compatibility problem.

Downgrade torchmetrics to 0.11.4; recommended command:

pip install torchmetrics==0.11.4

webui.sh --xformers

Building xformers requires LLVM's clang++.

export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
export CC="/usr/local/opt/llvm/bin/clang"
export CXX="/usr/local/opt/llvm/bin/clang++"

export CC="/usr/bin/gcc"
export CXX="/usr/bin/g++"


==> Summary
🍺 /opt/homebrew/Cellar/llvm/19.1.0: 8,068 files, 1.9GB
==> Running brew cleanup llvm
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see man brew).
Removing: /opt/homebrew/Cellar/llvm/18.1.8... (7,722 files, 1.8GB)
Removing: /Users/shengyang1/Library/Caches/Homebrew/llvm_bottle_manifest--18.1.8... (38.2KB)
==> Caveats
==> llvm
To use the bundled libc++ please add the following LDFLAGS:
  LDFLAGS="-L/opt/homebrew/opt/llvm/lib/c++ -L/opt/homebrew/opt/llvm/lib -lunwind"

llvm is keg-only, which means it was not symlinked into /opt/homebrew,
because macOS already provides this software and installing another version in
parallel can cause all kinds of trouble.

If you need to have llvm first in your PATH, run:
  echo 'export PATH="/opt/homebrew/opt/llvm/bin:$PATH"' >> ~/.zshrc

For compilers to find llvm you may need to set:
  export LDFLAGS="-L/opt/homebrew/opt/llvm/lib"
  export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"

Suggested configuration printed after installing LLVM:
export LDFLAGS="-L/opt/homebrew/opt/llvm/lib -Wl,-rpath,/opt/homebrew/opt/llvm/lib"
export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
export CC="/opt/homebrew/opt/llvm/bin/clang"
export CXX="/opt/homebrew/opt/llvm/bin/clang++"

Check the PyTorch version:

python -c "import torch; print(torch.__version__)"

What does --dry-run mean in "pip3 install xformers==0.0.26.post1 --dry-run"?

--dry-run is an option that rehearses a command without actually making any changes. In the command:

pip3 install xformers==0.0.26.post1 --dry-run

it means: resolve the installation of xformers 0.0.26.post1, but do not actually install it. pip reports what would happen (dependency resolution, files that would be downloaded) without downloading or installing anything, which makes it a safe way to test an operation and catch problems in advance.

Note that not every pip version supports --dry-run, so check your pip version's documentation to confirm the option is available.
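
For reference, pip added --dry-run in 22.2, so check and upgrade first if needed:

pip3 --version
pip3 install --upgrade pip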

Trying which version can actually be installed:

pip3 install xformers==0.0.26.post1 --dry-run
pip3 install xformers==0.0.25 --dry-run

pip3 install xformers==0.0.20 --dry-run
pip3 install xformers==0.0.19 --dry-run

xformers        pytorch
0.0.26.post1    torch==2.3.0
0.0.25          torch==2.2.1
0.0.24          torch==2.2.0
0.0.23          torch==2.1.1
0.0.22          torch==2.0.1
0.0.21          torch==2.0.1
0.0.20          torch==2.0.1
0.0.19          torch==2.0.0
0.0.18          torch==2.0.0
0.0.17          torch==2.0.0
0.0.16          torch==1.13.1
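
Per the table, pick the pin that matches the installed torch and let --dry-run confirm before committing, e.g. for torch 2.1.1:

pip3 install xformers==0.0.23 --dry-run
pip3 install xformers==0.0.23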

With torch 1.12.1, install version 0.0.16.

Correspondingly, install llvm@13:
brew install llvm@13

/opt/homebrew/Cellar/llvm@13/lib
/opt/homebrew/Cellar/llvm@13/13.0.1_2

Configuration that worked:

export LDFLAGS="-L/opt/homebrew/Cellar/llvm@13/13.0.1_2/lib -Wl,-rpath,/opt/homebrew/Cellar/llvm@13/13.0.1_2/lib"
export CPPFLAGS="-I/opt/homebrew/Cellar/llvm@13/13.0.1_2/include"
export CC="/opt/homebrew/Cellar/llvm@13/13.0.1_2/bin/clang"
export CXX="/opt/homebrew/Cellar/llvm@13/13.0.1_2/bin/clang++"
export PATH="/opt/homebrew/Cellar/llvm@13/13.0.1_2/bin:$PATH"
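
Sanity check that the keg's compiler is the one being picked up:

which clang         # should be /opt/homebrew/Cellar/llvm@13/13.0.1_2/bin/clang
clang --version     # should report clang version 13.0.1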

pip install xformers==0.0.16

Installing 0.0.16 from GitHub

git clone -b v0.0.11 https://github.com/facebookresearch/xformers.git

git submodule update --init --recursive

export LDFLAGS="-L/usr/local/opt/llvm@13/lib -Wl,-rpath,/usr/local/opt/llvm@13/lib"
export CPPFLAGS="-I/usr/local/opt/llvm@13/include"
export CC="/usr/local/opt/llvm@13/bin/clang"
export CXX="/usr/local/opt/llvm@13/bin/clang++"

export PATH="/usr/local/opt/llvm@12/bin:$PATH"

brew link llvm@13

export PATH="/usr/local/opt/llvm@12/bin:$PATH"
export LDFLAGS="-L/usr/local/opt/llvm@12/lib -Wl,-rpath,/usr/local/opt/llvm@12/lib"
export CPPFLAGS="-I/usr/local/opt/llvm@12/include"
export CC="/usr/local/opt/llvm@12/bin/clang"
export CXX="/usr/local/opt/llvm@12/bin/clang++"
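
With the toolchain configured, build and install from the checkout (a sketch of the usual setuptools flow; assumes the repo's requirements.txt covers the build deps):

cd xformers
pip install -r requirements.txt
pip install -e .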

import sys
sys.path.append('/path/to/xformers')

source /path/to/your/venv/bin/activate

How to reference a LoRA model in Stable Diffusion

https://zhuanlan.zhihu.com/p/621380618

Using a LoRA (Low-Rank Adaptation) model in Stable Diffusion generally involves these steps:

  1. Prepare the environment

    • Make sure the Stable Diffusion environment is set up, including the relevant dependencies and frameworks (PyTorch, Transformers, etc.).
  2. Download the LoRA model

    • Download the LoRA weights you need from a model repository (e.g. the Hugging Face Model Hub).
  3. Load the model

    • Load the Stable Diffusion model together with the LoRA weights. A sample snippet using the diffusers built-in LoRA loader:
    from diffusers import StableDiffusionPipeline

    # Load the base model
    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

    # Load the LoRA weights into the pipeline
    pipe.load_lora_weights("path/to/lora/model")

    # Generate images
    images = pipe("your prompt", num_inference_steps=50).images
  4. Set parameters

    • Adjust generation parameters such as num_inference_steps and guidance_scale as needed for the best results.
  5. Run generation

    • Call the pipeline with your prompt to generate images.

Adjust the example to the specific LoRA implementation and version you use. If you are working with a particular framework or tool (e.g. diffusers), consult its documentation for details.
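
In the AUTOMATIC1111 webui these notes use, the usual route is simpler: drop the LoRA file into models/Lora and reference it from the prompt (the file name and the 0.8 weight here are just placeholders):

a photo of a cat <lora:my_lora_name:0.8>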

/opt/homebrew/opt/llvm/bin/clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/shengyang1/opt/anaconda3/envs/gfpgan/include -fPIC -O2 -isystem /Users/shengyang1/opt/anaconda3/envs/gfpgan/include -I/opt/homebrew/opt/llvm/include -I/private/var/folders/rq/fxmg4qxs2c3dcsc0bg9n71b80000gn/T/pip-install-fi_qw4m6/xformers_000e06893b784419be96fe264710c0dc/xformers/csrc -I/Users/shengyang1/opt/anaconda3/envs/gfpgan/lib/python3.10/site-packages/torch/include -I/Users/shengyang1/opt/anaconda3/envs/gfpgan/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/Users/shengyang1/opt/anaconda3/envs/gfpgan/lib/python3.10/site-packages/torch/include/TH -I/Users/shengyang1/opt/anaconda3/envs/gfpgan/lib/python3.10/site-packages/torch/include/THC -I/Users/shengyang1/opt/anaconda3/envs/gfpgan/include/python3.10 -c xformers/csrc/attention/attention.cpp -o build/temp.macosx-10.9-x86_64-cpython-310/xformers/csrc/attention/attention.o -O3 -std=c++11 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_clang" -DPYBIND11_STDLIB="_libcpp" -DPYBIND11_BUILD_ABI="_cxxabi1002" -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0

export LDFLAGS="-L/opt/homebrew/opt/llvm/lib -Wl,-rpath,/opt/homebrew/opt/llvm/lib"
export CPPFLAGS="-I/opt/homebrew/opt/llvm/include"
export CC="/opt/homebrew/opt/llvm/bin/clang"
export CXX="/opt/homebrew/opt/llvm/bin/clang++"

/opt/homebrew/Cellar/llvm/19.1.0/bin/../include/c++/v1/math.h:402:63: error: use of undeclared identifier 'FP_SUBNORMAL'

  402 |     return __builtin_fpclassify(FP_NAN, FP_INFINITE, FP_NORMAL, FP_SUBNORMAL, FP_ZERO, __x);

/opt/homebrew/opt/llvm/bin

Bing's answer:
export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"
export CC="/usr/local/opt/llvm/bin/clang"
export CXX="/usr/local/opt/llvm/bin/clang++"

export PATH="/usr/local/opt/gcc@9/bin:$PATH"

Configuration found online:
export LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
export CPPFLAGS="-I/usr/local/opt/llvm/include"

export CC="/usr/local/opt/llvm/bin/clang"
export CXX="/usr/local/opt/llvm/bin/clang++"