Last edited by kszyd on 2025-01-17 12:36.
I previously posted a write-up on building a photo-album server with PhotoPrism, which also covered the base environment setup; see: Using a Raspberry Pi 4B as a home NAS to back up phone photos (part 1 of the "putting the elephant in the fridge" series).
However, the Raspberry Pi's modest hardware made PhotoPrism sluggish in real use. This time I tried Immich (see the immich project page), and the overall experience is much better. The base environment is still a Raspberry Pi 4B running OMV with the OMV-Compose plugin installed; the Immich images are deployed through that plugin. The installation steps follow:
1. In the Compose plugin, go to Files → Add to create the application: 1) paste/import the docker-compose.yml contents; 2) paste/import the .env contents.
2. The original docker-compose.yml is reproduced below; make the following changes:
1) Replace both occurrences of ghcr.io with the Chinese mirror ghcr.nju.edu.cn, and both occurrences of docker.io with the Chinese mirror docker.m.daocloud.io.
2) Below the line `- ${UPLOAD_LOCATION}:/usr/src/app/upload`, add a line mounting an external library path, which lets Immich load an existing photo collection. My photos live at the host path in this example: `- /mnt/disk1/old:/media/disk2/new:ro` (this is the real path on the system; you will need to enter this path later when adding the external library, and `ro` makes the mount read-only).
3) Replace the line `- model-cache:/cache` with a bind mount for locally stored model data. I put the models I cloned with git under ./mnt/model-cache, so the full line becomes: `- ./mnt/model-cache:/cache` # By default the models are downloaded from a mirror site abroad, which is very slow; you can git-clone them from a domestic mirror and copy them into this directory. Two kinds of models are involved: one for Chinese-language search and one for facial recognition. The default model cannot do Chinese semantic search over photos; XLM-Roberta-Large-Vit-B-16Plus can replace it. For facial recognition there are buffalo_l (the system default) and antelopev2 (which everyone seems to recommend). Put the downloaded or cloned data under /mnt/model-cache/clip/XLM-Roberta-Large-Vit-B-16Plus and /mnt/model-cache/facial-recognition/buffalo_l (or antelopev2) respectively. You must create the clip and facial-recognition folders yourself — this step is mandatory! The system searches these two folders when loading model data.
4) Delete the final two lines, `volumes:` and `model-cache:`.
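Taken together, the edits in step 2 leave the `image` and `volumes` entries of the two services looking roughly like this (a sketch using the mirror hosts and example paths from this post; substitute your own paths):

```yaml
services:
  immich-server:
    image: ghcr.nju.edu.cn/immich-app/immich-server:${IMMICH_VERSION:-release}
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /mnt/disk1/old:/media/disk2/new:ro    # external library, read-only
      - /etc/localtime:/etc/localtime:ro
  immich-machine-learning:
    image: ghcr.nju.edu.cn/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    volumes:
      - ./mnt/model-cache:/cache              # bind mount for local model data
# (the trailing top-level `volumes: model-cache:` block is deleted)
```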
Original docker-compose.yml contents:
```yaml
#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/redis:6.2-alpine@sha256:905c4ee67b8e0aa955331960d2aa745781e6bd89afc44a8584bfd13bc890f0ae
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always

  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    healthcheck:
      test: >-
        pg_isready --dbname="${POSTGRES_DB}" --username="${POSTGRES_USER}" || exit 1;
        Chksum="$(psql --dbname="${POSTGRES_DB}" --username="${POSTGRES_USER}" --tuples-only --no-align
        --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')";
        echo "checksum failure count is $Chksum";
        [ "$Chksum" = '0' ] || exit 1
      interval: 5m
      start_interval: 30s
      start_period: 5m
    command: >-
      postgres
      -c shared_preload_libraries=vectors.so
      -c 'search_path="$user", public, vectors'
      -c logging_collector=on
      -c max_wal_size=2GB
      -c shared_buffers=512MB
      -c wal_compression=on
    restart: always

volumes:
  model-cache:
```
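The two registry substitutions from step 2-1) can be scripted rather than edited by hand. A minimal sketch assuming GNU sed; the demo below rewrites a throwaway file so you can see the effect — point the `sed` at your real docker-compose.yml instead:

```shell
# Demo file standing in for docker-compose.yml
FILE="$(mktemp)"
printf '%s\n' \
  'image: ghcr.io/immich-app/immich-server:release' \
  'image: docker.io/redis:6.2-alpine' > "$FILE"

# Swap both registries for their CN mirrors in one pass
sed -i -e 's|ghcr\.io/|ghcr.nju.edu.cn/|g' \
       -e 's|docker\.io/|docker.m.daocloud.io/|g' "$FILE"

cat "$FILE"   # both image lines now point at the mirror hosts
```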
3. Below is the .env environment file; make these changes:
1) The line `UPLOAD_LOCATION=./library` is the path used for photos uploaded from phones; I changed it to `UPLOAD_LOCATION=/mnt/disk3/upload`.
2) The database files do not take up much space, so `DB_DATA_LOCATION` can be left unchanged.
```shell
# You can find documentation for all the supported env variables at https://immich.app/docs/install/environment-variables

# The location where your uploaded files are stored
UPLOAD_LOCATION=./library
# The location where your database files are stored
DB_DATA_LOCATION=./postgres

# To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
# TZ=Etc/UTC

# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=release

# Connection secret for postgres. You should change it to a random password
# Please use only the characters `A-Za-z0-9`, without special characters or spaces
DB_PASSWORD=postgres

# The values below this line do not need to be changed
###################################################################################
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
```
4. With the configuration above complete, Immich should start and run normally.
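Before starting the stack, it can save a restart loop to confirm that every host path wired into the yml and env files actually exists, since a missing bind-mount source is a common first failure. A small helper sketch (`check_dirs` is my own hypothetical name, not an Immich or OMV tool; the example paths are the ones used in this post):

```shell
# check_dirs: print any host directory that doesn't exist yet and
# return non-zero so a deploy script can stop early.
check_dirs() {
  local rc=0
  for d in "$@"; do
    [ -d "$d" ] || { echo "missing: $d"; rc=1; }
  done
  return "$rc"
}

# Example with the paths used in this post:
# check_dirs /mnt/disk3/upload /mnt/disk1/old /mnt/model-cache
```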
Problems I ran into during deployment, and their fixes:
1. If the stack fails to load, change the `- .env` entries in docker-compose.yml to `- immich.env` (under omv-compose the yml and env files must share the same base name; I named the application immich.yml when creating it).
2. The map inside Immich shows English labels; see: displaying the map in Chinese.
3. Reverse geocoding shows English; see: reverse geocoding in Chinese — be sure to read the replies under that thread carefully.
4. Whether the models load successfully depends on your hardware; they are memory- and CPU-hungry, and on a low-spec machine the worker process can time out. You can add a line to the .env file: `MACHINE_LEARNING_WORKER_TIMEOUT=600` # raises the timeout to 10 minutes; the default should be 120.
5. While model tasks are running, you can tail the logs with: `sudo docker logs -f immich_machine_learning`
Especially right after deployment, when the first smart-search and facial-recognition jobs run, make sure to watch the immich-machine-learning logs — the web UI does not show how the model is doing.
6. There are four multilingual CLIP models, listed below. Because my device is low-spec, the only one that returned correct results for Chinese-language searches was ViT-B-16-SigLIP-i18n-256__webli, so other low-end devices may want to try that one first.
- XLM-Roberta-Large-ViT-H-14__frozen_laion5b_s13b_b90k
- XLM-Roberta-Large-Vit-B-16Plus
- ViT-B-16-SigLIP-i18n-256__webli
- XLM-Roberta-Large-Vit-B-32
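For reference, the model-cache layout described in step 2-3) ends up looking like this (the clip and facial-recognition folder names must match exactly; the SigLIP entry only appears if you downloaded that model as well):

```
/mnt/model-cache
├── clip/
│   ├── XLM-Roberta-Large-Vit-B-16Plus/
│   └── ViT-B-16-SigLIP-i18n-256__webli/
└── facial-recognition/
    ├── buffalo_l/        (system default)
    └── antelopev2/
```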
Open issue: incomplete reverse-geocoding data means some regions still display incorrectly. I have not solved this yet — if you have experience with it, please leave a comment!