
Scrutiny SMART Monitoring in Container Manager on a Synology NAS

Important or Recent Updates

Historic Updates
29/05/2023: Updated to use Container Manager and Projects
25/10/2023: Added an additional security option to the compose to restrict the container from gaining new privileges
14/07/2024: Removed the need for setting up the synobridge network; the container can now sit on its own isolated bridge
22/07/2024: There may be an issue with the CRON kicking in on new installations, meaning the UI doesn’t update; to remedy this I have added an override to the compose
08/11/2024: Added a note to the database settings, as the minimum password length is 8 characters; added example cron schedules for running more than once per day; added an FAQ item relating to empty dashboards


What is Scrutiny?

Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates.

Let’s Begin

In this guide I will take you through the steps to get Scrutiny up and running in Docker.

Getting our drive details

We need to get some details about our drives in order for Scrutiny to read their SMART data.

It’s time to get logged into your DiskStation via SSH. In this guide I am using Windows Terminal; however, the steps will be similar on Mac and Linux.

Head into the DSM Control Panel > Terminal & SNMP and enable the SSH service.

Open up ‘Terminal’.

Now type ‘ssh’ followed by your main admin account username, an ‘@’, and your NAS IP address, then hit Enter:

Bash
ssh drfrankenstein@192.168.0.101

You will then be asked for that user’s password. You will not be able to see the password as you type it; if you are using a password manager you can right-click in the window to paste (the pasted text is also invisible). Press Enter when done.

Now that we are logged in, we just need to run a single command to see our drives. Note that I am not prefacing this command with sudo, as we don’t need the low-level detail. You will see some permission denied errors, but these can be ignored.

Bash
fdisk -l

The output you see depends on the NAS model you own. The two examples below are from an 1821+ and an 1815+, both of which have 8 bays; the 1821+ also has up to two NVMe slots.

The 1815+ has 8 drives, listed as sda through sdh.

The 1821+ has its drives broken down into SATA and NVMe devices: sata1 to sata8, plus nvme0n1 and nvme1n1. (Note that any connected eSATA devices will also show here.)

Make note of the devices you see in your output as we will need them for the config file and compose.
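If you want to pull the device names out of the output without reading through it all, a quick grep works. The sketch below runs against a couple of hypothetical sample lines so you can see the idea; on the NAS you would pipe the real fdisk output through the same filter instead.

```shell
# Extract /dev device names from fdisk-style "Disk /dev/..." lines.
# The sample lines here are hypothetical; on the NAS you would run:
#   fdisk -l 2>/dev/null | grep -oE '/dev/[a-z0-9]+' | sort -u
printf '%s\n' \
  "Disk /dev/sata1: 10.9 TiB, 12000138625024 bytes" \
  "Disk /dev/nvme0n1: 465.8 GiB, 500107862016 bytes" \
  | grep -oE '/dev/[a-z0-9]+'
```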

USB Drives

If you also want to add USB drives, this will depend on whether the manufacturer of the caddy passes the SMART data through. I have included commented-out USB entries in the config a bit further on.
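A quick way to check whether a caddy exposes a device node at all is to list /dev over the same SSH session; this is just a convenience check, and the exact names depend on your hardware.

```shell
# List any USB device nodes; prints a fallback message when none exist
ls /dev | grep -i '^usb' || echo "no USB device nodes found"
```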

Config Files and Folders

Next, let’s create the folders the container will need. Head into File Station and create a subfolder in the ‘docker’ share called ‘scrutiny’, and then within that another called ‘influxdb’.

Then, if you don’t already have one from my other guides, create another folder in the ‘docker’ share called ‘projects’, and within that another called ‘scrutiny’.
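If you prefer the terminal, the same folders can be created over the SSH session we already have open. This assumes your ‘docker’ share lives on volume1; adjust the path if yours differs.

```shell
# Create the Scrutiny config folders; -p also creates any missing parents
mkdir -p /volume1/docker/scrutiny/influxdb
mkdir -p /volume1/docker/projects/scrutiny
```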

Next come the config files. You can edit these in a number of ways, but to keep the guide OS-agnostic we will use the Synology Text Editor package, which can be installed via Package Center.

Open up a new text document and paste one of the two code snippets below into it. Use the one that matches up with the way your drives are shown in the previous step (if you come across anything different let me know in the comments!)

Type 1

YAML
version: 1
host:
  id: ""
devices:
  - device: /dev/sata1
    type: 'sat'
  - device: /dev/sata2
    type: 'sat'
  - device: /dev/sata3
    type: 'sat'
  - device: /dev/sata4
    type: 'sat'
  - device: /dev/sata5
    type: 'sat'
  - device: /dev/sata6
    type: 'sat'
  - device: /dev/sata7
    type: 'sat'
  - device: /dev/sata8
    type: 'sat'
  - device: /dev/nvme0n1
    type: 'nvme'
  - device: /dev/nvme1n1
    type: 'nvme'
#  - device: /dev/usb1
#    type: 'sat'
#  - device: /dev/usb2
#    type: 'sat'

Type 2

YAML
version: 1
host:
  id: ""
devices:
  - device: /dev/sda
    type: 'sat'
  - device: /dev/sdb
    type: 'sat'
  - device: /dev/sdc
    type: 'sat'
  - device: /dev/sdd
    type: 'sat'
  - device: /dev/sde
    type: 'sat'
  - device: /dev/sdf
    type: 'sat'
  - device: /dev/sdg
    type: 'sat'
  - device: /dev/sdh
    type: 'sat'
  - device: /dev/nvme0n1
    type: 'nvme'
  - device: /dev/nvme1n1
    type: 'nvme'
#  - device: /dev/usb1
#    type: 'sat'
#  - device: /dev/usb2
#    type: 'sat'

You will need to edit the config file in line with the number of drives you had in the output earlier, adding or removing lines accordingly, including the NVMe entries.

Also, I have included a couple of commented out lines for USB drives if you have them connected.

Next you can save this file as ‘collector.yaml’ in the ‘/docker/scrutiny’ folder.

Notifications Config (optional)

This step is optional and depends on whether you want to set up notifications in case one of your drives has issues.

As of writing there are 14 different notification methods; as you can imagine, I cannot cover every single type in this guide, but this will get the config file in place for you to amend based on your preferences.

Open up a new file in Text Editor again; this time you need to copy and paste the full contents of the example config file located here.

Scroll to the bottom of the file, where you will see a number of config options for notifications. You will need to remove the # from the ‘notify’ and ‘urls’ lines, and then, depending on which type of notification you decide to set up, remove the # from the corresponding line.

The level of notification you receive (Critical or All Issues) can be set up in the WebUI once Scrutiny is up and running.

Finally, save this file as ‘scrutiny.yaml’ into the /docker/scrutiny folder.
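If you only want notifications and would rather not keep the whole example file, Scrutiny also accepts a pared-down scrutiny.yaml containing just the notify section. The sketch below writes one over SSH; the discord URL is purely a placeholder (swap in the shoutrrr URL for your own service), and it assumes the ‘docker’ share is on volume1.

```shell
# Write a minimal scrutiny.yaml containing only a notifications section.
# The URL below is a placeholder, not a working endpoint.
mkdir -p /volume1/docker/scrutiny
cat > /volume1/docker/scrutiny/scrutiny.yaml <<'EOF'
notify:
  urls:
    - "discord://token@channel"
EOF
echo "wrote /volume1/docker/scrutiny/scrutiny.yaml"
```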

Docker Compose

We will be using Docker Compose in the Projects section of Container Manager to set up the container.

Open up Container Manager and click on Project then on the right-hand side click ‘Create’.

On the next screen we will set up our general settings. ‘Project Name’ will be ‘scrutiny’. For ‘Path’, click the button and select the folder we created earlier at ‘/docker/projects/scrutiny’. For ‘Source:’, change the drop-down to ‘Create docker-compose.yml’.

Next we are going to drop in our Docker Compose configuration: copy all the code in the box below and paste it into line ‘1’.

YAML
services:
  scrutiny:
    container_name: scrutiny
    image: ghcr.io/analogj/scrutiny:master-omnibus
    cap_add:
      - SYS_RAWIO
      - SYS_ADMIN
    ports:
      - 6090:8080/tcp # webapp
      - 8086:8086/tcp # influxDB admin
    volumes:
      - /run/udev:/run/udev:ro
      - /volume1/docker/scrutiny:/opt/scrutiny/config
      - /volume1/docker/scrutiny/influxdb:/opt/scrutiny/influxdb
    devices:
      - /dev/nvme0n1:/dev/nvme0n1
      - /dev/nvme1n1:/dev/nvme1n1
      - /dev/sata1:/dev/sata1
      - /dev/sata2:/dev/sata2
      - /dev/sata3:/dev/sata3
      - /dev/sata4:/dev/sata4
      - /dev/sata5:/dev/sata5
      - /dev/sata6:/dev/sata6
      - /dev/sata7:/dev/sata7
      - /dev/sata8:/dev/sata8
#      - /dev/usb1:/dev/usb1
#      - /dev/usb2:/dev/usb2
    environment:
      - SCRUTINY_WEB_INFLUXDB_TOKEN=ANYLONGSTRING
      - SCRUTINY_WEB_INFLUXDB_INIT_USERNAME=A-USERNAME
      - SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD=A-PASSWORD
      - COLLECTOR_CRON_SCHEDULE=0 23 * * *
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped


As you can see, the devices section contains all our drives. You will need to amend this in line with the config file you created earlier, keeping the paths on each side of the : matching, and adding or removing drives (including the NVMes) accordingly.

e.g., /dev/sata1:/dev/sata1 or /dev/sda:/dev/sda and so on.
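If you have a lot of bays, you can generate those device lines rather than typing them out. This little loop is just a convenience (run it in any shell) and prints entries for sda through sdh, ready to paste into the compose; adjust the letters to match your own fdisk output.

```shell
# Print compose 'devices:' entries for sda..sdh, indented for the
# devices: section of the compose file
for d in a b c d e f g h; do
  echo "      - /dev/sd$d:/dev/sd$d"
done
```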

In addition, you will see three variables in the ‘environment’ section that need to be updated as outlined below; these secure the database used by Scrutiny.

SCRUTINY_WEB_INFLUXDB_TOKEN: enter a string of characters; you can use almost anything, but treat it like a password, so use a nice long string
SCRUTINY_WEB_INFLUXDB_INIT_USERNAME: this can be anything you like
SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD: a secure password (minimum of 8 characters)
COLLECTOR_CRON_SCHEDULE: 0 23 * * *

This overrides the default cron schedule (midnight) so the collector runs at 23:00 instead.

You can change the schedule to run more than once per day by using https://crontab.cronhub.io/ to get the right code to include. For example 0 * * * * is hourly.

These 3 values are only required for the first ever setup – you can remove them once Scrutiny is up and running but keep them safe in case you ever need them. Maybe in Vaultwarden!
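If you want a quick way to come up with the token, any long random string will do. For example, over SSH you could pull one from /dev/urandom (this is just one approach; a password manager's generator works too):

```shell
# Generate a 48-character alphanumeric token suitable for
# SCRUTINY_WEB_INFLUXDB_TOKEN
TOKEN=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 48)
echo "$TOKEN"
```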

Once you have made the edits press ‘Next’

You do not need to enable anything on the ‘Web portal settings’ screen click ‘Next’ again.

On the final screen click ‘Done’, which will begin the download of the container image; once downloaded, the container will be launched!

You will now see Scrutiny running and should have a green status on the left-hand side.

You should now be able to access the Scrutiny WebUI by going to your NAS IP followed by port 6090

e.g., 192.168.0.30:6090

Sometimes it can take a few minutes before all your drives appear, as Scrutiny needs to obtain their information so don’t panic if it’s initially empty. You can now adjust settings for the UI and Notifications in the WebUI.

FAQ (Frequently Asked Questions)

I added extra drives to my config, and they don’t appear in the UI even after waiting

You can overcome this by stopping the overall Project and then rebuilding it via Action > Build in the Container Manager UI

My dashboard is empty and shows no drives

If you have waited until after the initial scan and still no drives appear then you can try triggering a manual scan via SSH

Bash
sudo docker exec scrutiny /opt/scrutiny/bin/scrutiny-collector-metrics run


Looking for some help, join our Discord community

If you are struggling with any steps in the guides or looking to branch out into other containers join our Discord community!

Buy me a beverage!

If you have found my site useful please consider pinging me a tip as it helps cover the cost of running things or just lets me stay hydrated. Plus 10% goes to the devs of the apps I do guides for every year.

Published in: Docker, Other Tools, 7.2, Synology

97 Comments

  1. BelgianMonster

    It’s stuck for a long time after container spotweb_db waiting, then it goes into error: dependency failed to start: container for service “spotweb_db” is unhealthy, exit code 1.

    screenshot: https://i.imgur.com/g7ZlOgS.png

    I tried joining the discord invite url but it’s not valid anymore

    • Dr_Frankenstein

      Have a look at the database container logs to see if they say what it’s doing. The Discord link should be valid, as people have joined via it in the last few hours.

  2. Richard

    Thanks for the excellent guide, this is my first container on my NAS so I’m very new to this and your guide was so easy to follow.

    Can I ask 2 things though:
    1. How do I add additional drives to scrutiny? I have installed 3 new drives and DSM can see them, but they aren’t appearing in scrutiny.

    2. If I want to add a notification at a later time, how do I do that? Do I just create the ‘scrutiny.yaml’ file into the /docker/scrutiny folder and then just stop/restart scrutiny?

    Thanks

    • Dr_Frankenstein

      Hey

      So essentially you follow the same instructions to get the new drives’ paths and add them to the two config files; they should then appear, but keep in mind it can take a little while for the scan. Same for notifications: you can edit the details at any point or swap them around whenever you need.

      Thanks

      • Richard

        Thanks.

        Is there also a way of giving the drives more meaningful names? Like the Bay number they are in?

        I was wondering if this section might allow this by changing one side of the : ? E.g. - /dev/sata1:Bay#1

        devices:
        - /dev/sata1:/dev/sata1
        - /dev/sata2:/dev/sata2
        - /dev/sata3:/dev/sata3

        • Dr_Frankenstein

          Yeah, you can add comments using #comment after the lines

          devices:
          - /dev/sata1:/dev/sata1 #bay1

          • Dennis

            Piggybacking on this comment chain – I would assume this comment is only visible in the yml and not in the interface?

            I am also kind of stuck trying to figure out how to differentiate between two external devices I have hooked up that are identical models. There doesn’t seem to be anything in the syno GUI that shows you serial number for external drives. When you run the terminal command to get the list of devices you get sdu, sdv, etc but that doesn’t really associate with any of the internal syno names either. Is there some way to tell which serial number is usbshare1, usbshare2, etc?

            Thanks for the great guide, easy to follow and I was up and running in minutes.

            • Dr_Frankenstein

              So how well this works really depends on the USB controller and compatibility.

              My WD MyBook Duo shows two drives as two volumes, and then there’s a USB SATA SSD.

              I check ls /dev and then make note of the USB items USB1/USB2 etc

              In the yaml I add the devices..

              - /dev/usb1:/dev/usb1
              - /dev/usb2:/dev/usb2
              - /dev/usb3:/dev/usb3

              then in the collector yaml

              - device: /dev/usb1
                type: 'sat'
              - device: /dev/usb2
                type: 'sat'
              - device: /dev/usb3
                type: 'sat'

              This shows me the SSD fine and the first of the two drives but I am always missing drive two in the WD Duo..

      • Richard

        HI,
        I added my additional drives to the \docker\projects\scrutiny\compose.yaml and \docker\scrutiny\collector.yaml files as per the instructions, and stopped and started the project, but after 48 hours the new drives are still not showing up in the web interface.

        How do I get scrutiny to reload these files please?

      • Richard

        OK, so I worked out I have to stop the project, edit my settings files and then Build the project again to get it to reload the settings. Just stopping and restarting didn’t re-load the settings/drives.

        • Dr_Frankenstein

          OK nice one – interesting that it required a rebuild, I will add this to the FAQ just in case!

  3. Harald Striepe

    It looks like the paths in the “Create Projects” settings and the suggested “docker-compose.yml” do not match.

    • Dr_Frankenstein

      They are correct; one houses the compose file, the other folder is for the configs.

  4. Harald Striepe

    I corrected the path to include /volume1/projects/scrutiny.

    It starts without error.

    I still see path references to /opt/scrutiny/config/collector.yaml in the log and /opt/scrutiny/influxdb etc. in the environment.

    Does the container do any automatic mapping to /opt ?

    Do I need a custom scrutiny.yaml to fix this?

  5. Harald Striepe

    I am ending up with
    Error response from daemon bind mount failed,
    Exit code 1.

    • Dr_Frankenstein

      Make sure you don’t have a typo in the folder names as this relates to attaching the config folders

      • Harald Striepe

        Correct. I noticed an error this morning and my compose file still had the old path without projects. The instance now builds and starts.
        Still waiting for the info to populate.
        Is the collector.yaml optional?


drfrankenstein.co.uk – writing Synology Docker Guides since 2016 – Join My Discord!