I’m hosting a few services using Docker. For something like an OpenStreetMap tile server, I’d like the data to stay on my SSD because the fast storage helps performance, and the directory is unlikely to grow and fill the drive.

For other services like NextCloud, speed isn’t as important as storage size, so I might want them on a larger HDD RAID array.

I know it’s trivial to move the volumes directory to wherever, but can I move some volumes to one directory and some volumes to another?

  • Dave@lemmy.nz · 19 points · 1 day ago

    I don’t know if this is naughty but I use bind mounts for everything, and docker compose to keep it all together.

    You can map directories or even individual files to directories/files on the host computer.

    Normally I make a directory for the service, then map all volumes to paths inside a ./data subdirectory or something like that. But you could easily bind to different directories. For example, for photoprism I mount my photos from a data drive for it to access, mount the main data/database to a directory that gets backed up, and mount the cache to a directory that doesn’t get backed up.
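    As a sketch, that layout in docker compose might look like this (the container paths are from memory, so check the photoprism docs; the host paths are made up for illustration):

```yaml
# Hypothetical photoprism service with bind mounts split by backup policy
services:
  photoprism:
    image: photoprism/photoprism
    volumes:
      - /mnt/data/photos:/photoprism/originals          # photo library on the data drive
      - ./data:/photoprism/storage                      # main data, inside the backed-up dir
      - /mnt/scratch/pp-cache:/photoprism/storage/cache # cache, excluded from backups
```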

    • suicidaleggroll@lemm.ee · 15 points · edited · 1 day ago

      Same, I don’t let Docker manage volumes for anything. If I need it to be persistent, I bind mount it to a subdirectory alongside the container’s compose file. It makes backups much easier too, since you can just stop all containers, back up everything in ~/docker or wherever you keep your compose files and volumes, and then restart them all.

      It also means you can go hog wild with docker system prune -af --volumes and there’s no risk of losing any of your data.

      • Dave@lemmy.nz · 4 points · 1 day ago

        Yes that’s what I do too!

        Overnight cron to stop containers, run borgmatic, then start the containers again.
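        A hedged sketch of that routine (the paths, schedule, and borgmatic setup are hypothetical; adjust to your own layout):

```shell
# Nightly backup: stop containers, run borgmatic, restart (illustrative only)
# crontab entry: 0 3 * * * /usr/local/bin/docker-backup.sh

cd /home/user/docker || exit 1   # directory holding compose files + bind mounts
docker compose stop              # quiesce every service so files are consistent
borgmatic                        # back up per your borgmatic configuration
docker compose start             # bring the services back up
```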

    • catloaf@lemm.ee · 9 points · 1 day ago

      I’ve always used bind mounts for data that needs to persist. Nonpersistent data is fine on docker volumes.

      • Dave@lemmy.nz · 14 points · 1 day ago

        Docker wants you to use volumes; that data is persistent too. They say volumes are much easier to back up. I disagree; I much prefer bind mounts, especially when it comes to selective backups.

        • KaninchenSpeed@sh.itjust.works · 7 points · 1 day ago

          Volumes are horrible. How would I easily edit a config file for the program running inside if the container doesn’t even start?

          Bind mounts + ZFS datasets are the way to go.
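          For example, a dataset-per-service layout might be set up like this (the pool and dataset names are made up; requires root and ZFS on the host):

```shell
# One ZFS dataset per service, mounted where the compose files expect it,
# so each service gets its own snapshots and quotas.
zfs create -p -o mountpoint=/srv/docker/nextcloud tank/docker/nextcloud
zfs create -o mountpoint=/srv/docker/photoprism tank/docker/photoprism
zfs snapshot tank/docker/nextcloud@pre-upgrade   # instant rollback point
```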

    • towerful@programming.dev · 5 up / 1 down · 1 day ago

      I do that, until some container has permissions issues.
      I tinker, try to fix it, give up, and use a volume. Or I fix it, but it never seems to be the same fix twice.

      • Dave@lemmy.nz · 9 points · 1 day ago

        I’ve occasionally had permissions issues, but I tend to be able to fix them. Normally it’s just a matter of deleting the files on the host and letting the container recreate them; it doesn’t always work, but it usually does.
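        When that doesn’t work, matching the host directory’s owner to the UID the container runs as usually does. A minimal sketch (the UID 1000 is an assumption; check the image docs or the container’s configured user):

```shell
# The bind-mounted host dir must be writable by the container's runtime UID.
mkdir -p ./data
chown -R 1000:1000 ./data    # 1000 is a hypothetical UID; may need sudo
stat -c '%u:%g' ./data       # prints 1000:1000 once ownership matches
```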

  • Matt The Horwood@lemmy.horwood.cloud · 6 points · 1 day ago

    If you use a volume, you can mount that anywhere.

    volumes:
      lemmy_pgsql:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: '/mnt/data/lemmy/pgsql'
    

    Then in your service, add the volume:

        volumes:
          - lemmy_pgsql:/var/lib/postgresql/data:Z
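
    Putting the two fragments together, a complete minimal file might look like this (the postgres service itself is illustrative; only the volume wiring comes from the snippets above). Note that the host path must already exist; this driver won’t create it.

```yaml
services:
  pgsql:
    image: postgres:16
    volumes:
      - lemmy_pgsql:/var/lib/postgresql/data:Z   # named volume defined below

volumes:
  lemmy_pgsql:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: '/mnt/data/lemmy/pgsql'   # must already exist on the host
```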
    
    • ikidd@lemmy.world · 3 points · 22 hours ago

      Is there any advantage to bind mounting that way? I’ve only ever done it by specifying the path directly in the container, usually ./data:/data or some such. Never had a problem with it.

        • ikidd@lemmy.world · 2 points · 18 hours ago

          Well, I know you can define volumes for other filesystem drivers, but with bind mounts you don’t need to define the volume as you do; you can just specify the path directly in the container’s volumes list and it will bind mount it. I was just wondering if there was any actual benefit to defining the volume explicitly over the simple way.

          • Matt The Horwood@lemmy.horwood.cloud · 3 points · 16 hours ago

            In my case I need to use a named volume for docker swarm, and I can also reuse a named volume in other services. If you’re not using swarm, then just a bind mount should be fine.
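            Reuse might look like this (both services are hypothetical; the point is that they share the one named volume):

```yaml
services:
  pgsql:
    image: postgres:16
    volumes:
      - lemmy_pgsql:/var/lib/postgresql/data
  db-backup:
    image: alpine
    command: tar czf /backup/pgsql.tar.gz -C /data .
    volumes:
      - lemmy_pgsql:/data:ro   # same named volume, mounted read-only
      - ./backups:/backup

volumes:
  lemmy_pgsql:
    driver: local
```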

            • ikidd@lemmy.world · 1 point · 16 hours ago

              OK, yeah, that’s a good point about swarms. I’ve generally not used any swarmed filesystem stuff where I needed persistence, just shared databases, so it hasn’t come up.

  • e0qdk@reddthat.com · 8 points · 2 days ago

    You can run docker containers with multiple volumes, e.g. by passing -v src1:dst1 -v src2:dst2 as arguments to docker run.

    So – if I understood your question correctly – yes, you can do that.
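    Applied to the original question, a hedged compose sketch (the images are examples and the container paths depend on the image you pick, so check their docs; the host paths are made up):

```yaml
services:
  tileserver:
    image: overv/openstreetmap-tile-server   # example image
    volumes:
      - /mnt/ssd/osm:/data/database          # fast SSD for the tile database

  nextcloud:
    image: nextcloud
    volumes:
      - /mnt/raid/nextcloud:/var/www/html    # large HDD RAID for bulk storage
```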

  • astrsk@fedia.io · 3 up / 2 down · 1 day ago

    This is mostly an IOPS-dependent answer. Do you have multiple hot services constantly hitting the disk? If so, it can be advantageous to split the heavy hitters across different disk controllers, so in high-redundancy situations that means separate dedicated pools. If it’s a bunch of services just reading, filesystems like ZFS use caching to almost completely eliminate disk thrashing.