
Scrutiny SMART Monitoring in Container Manager on a Synology NAS

Important or Recent Updates
Historic Updates | Date
Updated to use Container Manager and Projects | 29/05/2023
Added an additional security option to the compose to restrict the container from gaining new privileges | 25/10/2023
Removed the need for setting up the synobridge network; the container now sits on its own isolated bridge | 14/07/2024
There may be an issue with the CRON kicking in on new installations meaning the UI doesn’t update – to remedy this I have added an override to the compose | 22/07/2024


What is Scrutiny?

Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates.

Let’s Begin

In this guide I will take you through the steps to get Scrutiny up and running in Docker.

Getting our drive details

We need to get some details about our drives in order for Scrutiny to read their SMART data.

It’s time to get logged into your DiskStation via SSH. In this guide I am using Windows Terminal, however the steps will be similar on Mac and Linux.

Head into the DSM Control Panel > Terminal & SNMP and then enable SSH service.

Open up ‘Terminal’.

Now type ‘ssh’ followed by your main admin account username, an ‘@’ and your NAS IP address, then hit Enter.

Bash
ssh drfrankenstein@192.168.0.101

You will then be asked for the password of the account you are logging in with; you will not be able to see the password as you type it. (If you are using a password manager, right-clicking in the window will paste – you won’t see anything appear.) Press Enter once it is entered.

Now we are logged in, we just need a single command to see our drives. Note I am not prefacing this command with sudo as we don’t need the low-level detail; you will see some permission denied errors, but these can be ignored.

Bash
fdisk -l

The output you will see depends on the NAS model you own. The two examples below are from an 1821+ and an 1815+, both of which have 8 bays; the 1821+ also has up to 2 NVMe slots.

The 1815+ has 8 drives broken down from sda to sdh

The 1821+ has 8 drives broken down into SATA and NVMe devices, sata1 to sata8 along with nvme0n1 and nvme1n1. (Note: if you have any eSATA devices connected these will also show.)

Make note of the devices you see in your output as we will need them for the config file and compose.
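
If you would like to cross-check what the collector will be able to see, DSM includes a copy of smartctl, so you can also list the devices it recognises from the same SSH session. This is purely an optional sanity check (depending on your user’s permissions you may need to preface the command with sudo):

Bash
smartctl --scan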

Config Files and Folders

Next let’s create the folders the container will need. Head into File Station and create a subfolder in the ‘docker’ share called ‘scrutiny’, and then within that another called ‘influxdb’. It should look like the below.

Then, if you don’t have one already from my other guides, create another folder in the ‘docker’ share called ‘projects’, and within that another one called ‘scrutiny’.
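
If you prefer the command line, the same folders can be created from the SSH session you already have open. This is just a shorthand and assumes your ‘docker’ share lives on volume1 as it does throughout this guide (prefix the commands with sudo if your user lacks write access to the share):

Bash
mkdir -p /volume1/docker/scrutiny/influxdb
mkdir -p /volume1/docker/projects/scrutiny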

Next comes the config file. You can edit this file in a number of ways, but to keep the guide OS-agnostic we will be using the Synology Text Editor package, which can be installed via Package Center.

Open up a new text document and paste one of the two code snippets below into it. Use the one that matches up with the way your drives are shown in the previous step (if you come across anything different let me know in the comments!)

Type 1

YAML
version: 1
host:
  id: ""
devices:
  - device: /dev/sata1
    type: 'sat'
  - device: /dev/sata2
    type: 'sat'
  - device: /dev/sata3
    type: 'sat'
  - device: /dev/sata4
    type: 'sat'
  - device: /dev/sata5
    type: 'sat'
  - device: /dev/sata6
    type: 'sat'
  - device: /dev/sata7
    type: 'sat'
  - device: /dev/sata8
    type: 'sat'
  - device: /dev/nvme0n1
    type: 'nvme'
  - device: /dev/nvme1n1
    type: 'nvme'

Type 2

YAML
version: 1
host:
  id: ""
devices:
  - device: /dev/sda
    type: 'sat'
  - device: /dev/sdb
    type: 'sat'
  - device: /dev/sdc
    type: 'sat'
  - device: /dev/sdd
    type: 'sat'
  - device: /dev/sde
    type: 'sat'
  - device: /dev/sdf
    type: 'sat'
  - device: /dev/sdg
    type: 'sat'
  - device: /dev/sdh
    type: 'sat'
  - device: /dev/nvme0n1
    type: 'nvme'
  - device: /dev/nvme1n1
    type: 'nvme'

You will need to edit the config file in line with the number of drives you had in the output earlier, adding or removing lines accordingly, including adding or removing the NVMe drives.
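
For example, on a hypothetical 4-bay model with no NVMe drives whose output shows sda to sdd, the trimmed-down file would look something like this:

YAML
version: 1
host:
  id: ""
devices:
  - device: /dev/sda
    type: 'sat'
  - device: /dev/sdb
    type: 'sat'
  - device: /dev/sdc
    type: 'sat'
  - device: /dev/sdd
    type: 'sat'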

Next you can save this file as ‘collector.yaml’ in the ‘/docker/scrutiny’ folder.

Notifications Config (optional)

This step is optional and depends on whether you want to set up some notifications in case one of your drives has issues.

As of writing there are 14 different notification methods; as you can imagine I cannot cover every single type in this guide, but this will get the config file in place for you to amend based on your preferences.

Open up a new file in Text Editor again; this time you need to copy and paste the full contents of the example config file located here.

Scroll to the bottom of the file where you will see a number of config options for notifications. You will need to remove the # from the ‘notify’ and ‘urls’ lines, and then, depending on which type of notification you decide to set up, remove the # from the corresponding line.
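
As a rough illustration only (the exact URL format depends on the service you choose and is documented alongside each commented line in the example file), an uncommented section for a single Discord notification might end up looking something like this:

YAML
notify:
  urls:
    - "discord://token@channel"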

The level of notification you receive (Critical or All Issues) can be set up in the WebUI once Scrutiny is up and running.

Removing the # from the required lines

Finally, save this file as ‘scrutiny.yaml’ into the /docker/scrutiny folder.

Docker Compose

We will be using Docker Compose in the Projects section of Container Manager to set up the container.

Open up Container Manager and click on Project then on the right-hand side click ‘Create’.

On the next screen we will set up our General Settings: the ‘Project Name’ will be ‘scrutiny’; for the ‘Path’, click the button and select the folder we created earlier at ‘/docker/projects/scrutiny’; for ‘Source:’, change the drop-down to ‘Create docker-compose.yml’.

Next we are going to drop in our Docker Compose configuration. Copy all the code in the box below and paste it into line ‘1’, just like the screenshot.

YAML
services:
  scrutiny:
    container_name: scrutiny
    image: ghcr.io/analogj/scrutiny:master-omnibus
    cap_add:
      - SYS_RAWIO
      - SYS_ADMIN
    ports:
      - 6090:8080/tcp # webapp
      - 8086:8086/tcp # influxDB admin
    volumes:
      - /run/udev:/run/udev:ro
      - /volume1/docker/scrutiny:/opt/scrutiny/config
      - /volume1/docker/scrutiny/influxdb:/opt/scrutiny/influxdb
    devices:
      - /dev/nvme0n1:/dev/nvme0n1
      - /dev/nvme1n1:/dev/nvme1n1
      - /dev/sata1:/dev/sata1
      - /dev/sata2:/dev/sata2
      - /dev/sata3:/dev/sata3
      - /dev/sata4:/dev/sata4
      - /dev/sata5:/dev/sata5
      - /dev/sata6:/dev/sata6
      - /dev/sata7:/dev/sata7
      - /dev/sata8:/dev/sata8
    environment:
      - SCRUTINY_WEB_INFLUXDB_TOKEN=ANYLONGSTRING
      - SCRUTINY_WEB_INFLUXDB_INIT_USERNAME=A-USERNAME
      - SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD=A-PASSWORD
      - COLLECTOR_CRON_SCHEDULE=0 23 * * *
    network_mode: bridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped


As you can see, the devices section contains all our drives; you will need to amend this in line with the config file you created earlier. Keep the paths on each side of the ‘:’ matching, adding or removing drives accordingly, including the NVMe ones.

e.g., /dev/sata1:/dev/sata1 or /dev/sda:/dev/sda and so on.
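
Sticking with the hypothetical 4-bay sda to sdd example from earlier, that section of the compose would be trimmed down to:

YAML
    devices:
      - /dev/sda:/dev/sda
      - /dev/sdb:/dev/sdb
      - /dev/sdc:/dev/sdc
      - /dev/sdd:/dev/sdd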

In addition to this, you will see in the ‘environment’ section three variables that need to be updated as outlined below; these secure the database used by Scrutiny.

Variable | Value
SCRUTINY_WEB_INFLUXDB_TOKEN | Enter a string of characters; you can use almost anything. Treat it like a password, so use a nice long string
SCRUTINY_WEB_INFLUXDB_INIT_USERNAME | This can be anything you like
SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD | A secure password
COLLECTOR_CRON_SCHEDULE | 0 23 * * *

This overrides the default CRON schedule, which runs at midnight, so the collector runs at 23:00 instead.
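
The value is a standard five-field CRON expression, so you can adjust it to suit. Purely as an illustration, this would run the collector every 6 hours instead of once a day:

YAML
      - COLLECTOR_CRON_SCHEDULE=0 */6 * * *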

These 3 values are only required for the first ever setup – you can remove them once Scrutiny is up and running but keep them safe in case you ever need them. Maybe in Vaultwarden!

Once you have made the edits press ‘Next’

You do not need to enable anything on the ‘Web portal settings’ screen; click ‘Next’ again.

On the final screen click ‘Done’, which will begin the download of the container image; once downloaded, it will be launched!

You will now see Scrutiny running and should have a green status on the left-hand side.

You should now be able to access the Scrutiny WebUI by going to your NAS IP followed by port 6090

e.g., 192.168.0.30:6090

Sometimes it can take a few minutes before all your drives appear, as Scrutiny needs to obtain their information so don’t panic if it’s initially empty. You can now adjust settings for the UI and Notifications in the WebUI.

FAQ (Frequently Asked Questions)

I added extra drives to my config, and they don’t appear in the UI even after waiting

You can overcome this by stopping the overall Project and then rebuilding it via Action > Build in the Container Manager UI
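
If drives are still missing after a rebuild, it is worth a quick look at the container logs to confirm the collector is running and can talk to each device. From an SSH session (assuming the container name of ‘scrutiny’ used in the compose above):

Bash
sudo docker logs scrutiny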


Looking for some help, join our Discord community

If you are struggling with any steps in the guides or looking to branch out into other containers join our Discord community!

Buy me a beverage!

If you have found my site useful please consider pinging me a tip as it helps cover the cost of running things or just lets me stay hydrated. Plus 10% goes to the devs of the apps I do guides for every year.


79 Comments

    • Dr_Frankenstein

      So I recently learned the scheduled scan takes place at midnight – leave it running and let me know if they all appear over the next 24 hours. It is usually really fast in my experience, but for some it seems to take longer.

      • Helio370

        Also, after multiple days there is only one disk shown.
        I am in the process of exchanging all the HDs.

        Today a new disk was detected, which is good, but the other 3 are still missing.

        I ran fdisk -l and smartctl --scan again. Interestingly, the order in fdisk is a bit of a mess at the moment, while in smartctl --scan it is ordered.

        I think I will reboot the NAS when all the new disks are in and the RAID has been restored properly.

        fdisk -l
        fdisk: cannot open /dev/ram0: Permission denied
        fdisk: cannot open /dev/ram1: Permission denied
        fdisk: cannot open /dev/ram2: Permission denied
        fdisk: cannot open /dev/ram3: Permission denied
        fdisk: cannot open /dev/ram4: Permission denied
        fdisk: cannot open /dev/ram5: Permission denied
        fdisk: cannot open /dev/ram6: Permission denied
        fdisk: cannot open /dev/ram7: Permission denied
        fdisk: cannot open /dev/ram8: Permission denied
        fdisk: cannot open /dev/ram9: Permission denied
        fdisk: cannot open /dev/ram10: Permission denied
        fdisk: cannot open /dev/ram11: Permission denied
        fdisk: cannot open /dev/ram12: Permission denied
        fdisk: cannot open /dev/ram13: Permission denied
        fdisk: cannot open /dev/ram14: Permission denied
        fdisk: cannot open /dev/ram15: Permission denied
        fdisk: cannot open /dev/sdc: Permission denied
        fdisk: cannot open /dev/sdd: Permission denied
        fdisk: cannot open /dev/md0: Permission denied
        fdisk: cannot open /dev/zram0: Permission denied
        fdisk: cannot open /dev/zram1: Permission denied
        fdisk: cannot open /dev/zram2: Permission denied
        fdisk: cannot open /dev/zram3: Permission denied
        fdisk: cannot open /dev/md1: Permission denied
        fdisk: cannot open /dev/synoboot: Permission denied
        fdisk: cannot open /dev/md2: Permission denied
        fdisk: cannot open /dev/mapper/vg1-syno_vg_reserved_area: Permission denied
        fdisk: cannot open /dev/mapper/vg1-volume_1: Permission denied
        fdisk: cannot open /dev/mapper/cachedev_0: Permission denied
        fdisk: cannot open /dev/sda: Permission denied
        fdisk: cannot open /dev/sdb: Permission denied

        smartctl --scan
        /dev/sda -d scsi # /dev/sda, SCSI device
        /dev/sdb -d scsi # /dev/sdb, SCSI device
        /dev/sdc -d scsi # /dev/sdc, SCSI device
        /dev/sdd -d scsi # /dev/sdd, SCSI device

        I cannot imagine that the order matters. Or does it?
        I will also check the logs of the container once all the disks have been replaced.

        Other than that, I found out that my InfluxDB is not on the same port, but that shouldn’t be the issue, as I do get partially new data.

        - 8087:8086/tcp # influxDB admin

        • Dr_Frankenstein

          Hey, the order shouldn’t matter. Wait for your array to finish and do a reboot, then please report back. You may need to adjust/add a line in the config file for the InfluxDB, however the logs will normally tell you if it can’t connect.

          • Helio370

            Thanks a lot for your help!

            After a reboot and 24h of operation, still only one disk is shown.

            I had a look in the logs. This is suspicious to me (logs below, sorry for the bad formatting, no idea if I can format it somehow):

            1. smartctl --xall --json --device sat /dev/sdb on the Synology (ssh) shows an error:
            smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
            Copyright (C) 2002-16, Bruce Allen, Christian Franke, http://www.smartmontools.org
            =======> UNRECOGNIZED OPTION: json

            time="2024-07-22T00:00:26Z" level=info msg="Publishing smartctl results for 0x5000c500c325d963\n" type=metrics
            time="2024-07-22T00:00:25Z" level=info msg="Executing command: smartctl --xall --json --device sat /dev/sdb" type=metrics
            time="2024-07-22T00:00:25Z" level=info msg="Collecting smartctl results for sdb\n" type=metrics
            time="2024-07-22T00:00:25Z" level=error msg="An error occurred while publishing SMART data for device (0x5000c500c325d963): Post \"http://localhost:8080/api/device/0x5000c500c325d963/smart\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" type=metrics
            time="2024-07-22T00:00:21Z" level=info msg="127.0.0.1 - 57131787e2f9 [22/Jul/2024:00:00:21 +0000] \"POST /api/device/0x5000c500c325d963/smart\" 200 16 \"\" \"Go-http-client/1.1\" (17909ms)" clientIP=127.0.0.1 hostname=57131787e2f9 latency=17909 method=POST path=/api/device/0x5000c500c325d963/smart referer= respLength=16 statusCode=200 type=web userAgent=Go-http-client/1.1

            • Dr_Frankenstein

              I am doing some testing this end to see if something has changed in the setup at all to cause the issue, as someone on Reddit had a similar issue.

            • Dr_Frankenstein

              Can you try adding this to your compose to force the cron to run earlier:
              environment:
              - COLLECTOR_CRON_SCHEDULE=0 22 * * *

              Amend the 22 to the hour you want to trigger the scan and let me know what happens.

  1. Kevin

    Hi –

    I’m receiving a “bind mount failed: ‘/volume1/docker/projects/scrutiny/influxdb’ does not exist” error. Any thoughts?

    Thanks,

    • Dr_Frankenstein

      Have you created that folder? Also check for typos and capitalisation.

      • Kevin

        Ah thanks – I didn’t create the “influxdb” folder under “scrutiny”.

