Enjoy SD while it lasts; the founding team has been leaving one after another, and rumor has it the company may get acquired 😂
Impressive 😧
ROCm on Windows also supports gfx1100, 1101 and 1102. Is there any way to get the 780M to use ROCm on Windows?
Copying the approach above doesn't seem to work:
@echo off
SET HSA_OVERRIDE_GFX_VERSION=11.0.0
start "" "C:\Users\M1175\AppData\Local\LM-Studio\LM Studio.exe"
The environment-variable trick doesn't work in the Windows build.
Windows ROCm: HSA_OVERRIDE_GFX_VERSION has no effect · Issue #3107 · ollama/ollama (github.com)
Replacing the library and rocblas files as described in that issue didn't work for me either, but I did get ROCm working through ollama. On the Linux side I didn't expect ROCm to take up 30 GB; installing it outright crashed my system 😭
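For reference, a minimal sketch of that Linux ollama route, assuming ollama picks up HSA_OVERRIDE_GFX_VERSION from its environment (which is what the linked issue discusses); the model name is just a placeholder:

export HSA_OVERRIDE_GFX_VERSION=11.0.0   # report the 780M (gfx1103) as gfx1100
ollama serve &                           # start the server with that environment
ollama run llama3                        # run any model, then check the server log to see whether the GPU or the CPU runner was picked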
True. I previously followed the AMD docs and used Docker; good grief, after installing it ate 67 GB of my disk.
ERROR:bitsandbytes.cextension:Could not load bitsandbytes native library: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 109, in
lib = get_native_library()
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 88, in get_native_library
cuda_specs = get_cuda_specs()
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/site-packages/bitsandbytes/cuda_specs.py", line 39, in get_cuda_specs
cuda_version_string=(get_cuda_version_string()),
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/site-packages/bitsandbytes/cuda_specs.py", line 29, in get_cuda_version_string
major, minor = get_cuda_version_tuple()
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/site-packages/bitsandbytes/cuda_specs.py", line 24, in get_cuda_version_tuple
major, minor = map(int, torch.version.cuda.split("."))
AttributeError: 'NoneType' object has no attribute 'split'
WARNING:bitsandbytes.cextension:
CUDA Setup failed despite CUDA being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
2024-05-27 21:54:14.420272: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-05-27 21:54:14.422645: I external/local_tsl/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-05-27 21:54:14.453005: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-27 21:54:15.112230: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Traceback (most recent call last):
File "/home/caoyuUU/stable-diffusion-webui/launch.py", line 48, in
main()
File "/home/caoyuUU/stable-diffusion-webui/launch.py", line 44, in main
start()
File "/home/caoyuUU/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
import webui
File "/home/caoyuUU/stable-diffusion-webui/webui.py", line 13, in
initialize.imports()
File "/home/caoyuUU/stable-diffusion-webui/modules/initialize.py", line 39, in imports
from modules import processing, gradio_extensons, ui # noqa: F401
File "/home/caoyuUU/stable-diffusion-webui/modules/processing.py", line 14, in
import cv2
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cv2/init.py", line 181, in
bootstrap()
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cv2/init.py", line 153, in bootstrap
native_module = importlib.import_module("cv2")
File "/home/caoyuUU/.pyenv/versions/3.10.6/lib/python3.10/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libpng16-7379b3c3.so.16.40.0: cannot open shared object file: No such file or directory
What's causing this error?
My GPU is an AMD 6600XT.
Your environment isn't set up properly.
https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/install-pytorch.html
Work through the verification steps later on that page.
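For reference, the checks at the end of that page boil down to roughly the following (a sketch, not the exact wording of the AMD docs):

rocminfo | grep -i gfx        # the GPU should be listed with its gfx target
groups                        # your user should be in the "render" and "video" groups
python3 -c "import torch; print(torch.cuda.is_available())"   # should print True once the ROCm build of PyTorch is installed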
Thanks man, it works now.
rocBLAS error: Cannot read /home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary.dat: Illegal seek for GPU arch : gfx1103
List of available TensileLibrary Files :
"/home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx1100.dat"
"/home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx906.dat"
"/home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx900.dat"
"/home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx90a.dat"
"/home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx1030.dat"
"/home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx942.dat"
"/home/jie/anaconda3/envs/rocm6/lib/python3.10/site-packages/torch/lib/rocblas/library/TensileLibrary_lazy_gfx908.dat"
I'm on Linux with PyTorch and the GPU is also a 780M. torch.cuda.is_available() returns True, but as soon as I run anything I get this error, even though I've set the environment variable in .bashrc.
Why is that?
export HSA_OVERRIDE_GFX_VERSION=11.0.0
Did you set this? It has to be exactly 11.0.0.
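The 780M reports itself as gfx1103, which is not in the TensileLibrary list above, so rocBLAS has no kernels for it; the override makes the runtime present it as gfx1100 so the gfx1100 files get loaded instead. A minimal sketch, using the launch flags quoted elsewhere in this thread, with the variable set in the same shell that starts the webui:

export HSA_OVERRIDE_GFX_VERSION=11.0.0    # map gfx1103 (780M) onto the gfx1100 binaries
python launch.py --precision full --no-half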
Yes, I did; I added it on the last line of my .bashrc.
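One thing worth ruling out (a suggestion, not something confirmed in this thread): .bashrc is only read by new interactive shells, so a terminal or desktop launcher that was already open will not see the variable. Checking the running shell, or setting it inline on the launch command, removes that doubt:

echo $HSA_OVERRIDE_GFX_VERSION                                                # verify the current shell actually has it
HSA_OVERRIDE_GFX_VERSION=11.0.0 python launch.py --precision full --no-half  # or set it for just this one process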
Hi, my laptop CPU is an R9-7940HS with no discrete GPU; the iGPU is a Radeon 780M and the OS is the latest stable Deepin V23. Following your tutorial I did:
- sudo apt install ./amdgpu-install_6.1.60103-1_all.deb
- Python 3.10.6
The driver and ROCm were installed from the links you gave and both installed successfully; rocm-smi and rocminfo both print the expected information.
Stable Diffusion's dependencies and models are all installed correctly. I first run export HSA_OVERRIDE_GFX_VERSION=11.0.0, then python launch.py --precision full --no-half, and it fails with an error.
Running python launch.py --precision full --no-half --skip-torch-cuda-test instead, Stable Diffusion starts and can generate images from text, but it runs on the CPU and the 780M iGPU is never used. The tutorial installs ROCm 5.7; could ROCm 5.7 simply be too old?
I also noticed that the PyTorch installed per your tutorial (pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.7) likewise runs on the CPU. While searching I found this write-up on deploying Stable Diffusion on AMD 7000-series cards under Ubuntu 22.04: https://blog.csdn.net/dr_chenseu/article/details/138233189. That article uses a PyTorch build made specifically for ROCm, whereas the PyTorch I ended up with is the CPU build.
Can anyone help me figure out what I need to do so Stable Diffusion actually uses the Radeon 780M iGPU? Thanks!
My CPU is an R9-7940HS and the iGPU is also a 780M, but torch.cuda.is_available() returns False, and SD can't use the iGPU when generating images.
export HSA_OVERRIDE_GFX_VERSION=11.0.0
That error usually means the environment variable isn't set; try adding this.
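If torch.cuda.is_available() stays False even with the override set, another possibility (an assumption, not confirmed in this thread) is that the installed wheel is the CPU-only PyTorch rather than the ROCm build; no environment variable will make that one use the GPU. A quick way to tell, and to reinstall from the ROCm 5.7 index the tutorial uses:

python3 -c "import torch; print(torch.__version__, torch.version.hip)"   # a ROCm build prints something like 2.x.x+rocm5.7 plus a HIP version; the CPU build prints None for hip
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/rocm5.7   # pull the ROCm wheels if it turns out to be the CPU build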
On Windows I successfully ran Stable Diffusion on the R9-7940HS's iGPU using DirectML. I haven't tried ROCm on Windows, but from what I've read, ROCm on Windows is provided through AMD's HIP.
Man, it still doesn't work.
That's voodoo territory then, emm
Could someone explain why --no-half is needed? I do get an error without it, but what is the actual reason?
First, some sample results:
Setup guide:
1. Install the AMDGPU driver
See my earlier post: https://bbs.deepin.org.cn/post/271384
2. Install pyenv and create a Python 3.10.6 environment
See my earlier post: https://bbs.deepin.org.cn/post/267350
3. Install Stable Diffusion and configure the environment