Last updated on 10 November 2024
Important or Recent Updates

| Historic Updates | Date |
|---|---|
| Updated to use Container Manager and Projects | 29/05/2023 |
| Added an additional security option to the compose to restrict the container from gaining new privileges | 25/10/2023 |
| Removed the need for setting up the synobridge network, allowing the container to sit on its own isolated bridge | 14/07/2024 |
| There may be an issue with the CRON kicking in on new installations, meaning the UI doesn’t update – to remedy this I have added an override to the compose | 22/07/2024 |
| Added a note to the database variables as the minimum password length is 8 characters. Added example cron schedules for running more than once per day. Added an FAQ item relating to empty dashboards | 08/11/2024 |
What is Scrutiny?
Scrutiny is a Hard Drive Health Dashboard & Monitoring solution, merging manufacturer provided S.M.A.R.T metrics with real-world failure rates.
Let’s Begin
In this guide I will take you through the steps to get Scrutiny up and running in Docker.
Getting our drive details
We need to get some details about our drives in order for Scrutiny to read their SMART data.
It’s time to log into your DiskStation via SSH. In this guide I am using Windows Terminal, but the steps will be similar on Mac and Linux.
Head into the DSM Control Panel > Terminal & SNMP and then enable SSH service.
Open up ‘Terminal’.
Now type ‘ssh’ followed by your main admin account username, an ‘@’, and your NAS IP address, then hit Enter:
ssh drfrankenstein@192.168.0.101
You will then be asked to enter the password for your main Synology user account. You will not be able to see the password as you type it; if you are using a password manager, right-clicking in the window will paste (you won’t see anything appear). Then press Enter.
Now that we are logged in, we just need a single command to see our drives. Note that I am not prefacing this command with sudo, as we don’t need the low-level detail. You will see some permission denied errors, but these can be ignored.
fdisk -l
The output you see depends on the model of NAS you own. The two examples below are from an 1821+ and an 1815+, both of which have 8 bays; the 1821+ also has up to 2 NVMe slots.
The 1815+ has 8 drives, listed as sda to sdh.
The 1821+ has 8 drives broken down into SATA and NVMe devices: sata1 to sata8, plus nvme0n1 and nvme1n1. (Note: if you have any eSATA devices connected, these will also show.)
Make note of the devices you see in your output as we will need them for the config file and compose.
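If you would rather not pick the drives out of the fdisk output by eye, a quick sketch like the one below lists the likely device nodes directly. The globs are assumptions covering the two naming schemes shown above (sdX and sataN) plus NVMe; adjust them to match your model.

```shell
# Print any existing device nodes matching the common Synology patterns.
# Unmatched globs are skipped by the -e existence check.
for d in /dev/sd? /dev/sata? /dev/nvme?n1; do
  [ -e "$d" ] && echo "$d"
done
```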
USB Drives
If you also want to add USB drives, this will depend on whether the manufacturer of the caddy passes the SMART info through. I have commented out the USB parts in the config a bit further on.
Config Files and Folders
Next let’s create the folders the container will need. Head into File Station and create a subfolder in the ‘docker’ share called ‘scrutiny’, and then within that another called ‘influxdb’. It should look like the below.
Then, if you don’t already have one from my other guides, create another folder in the ‘docker’ share called ‘projects’, and within that another one called ‘scrutiny’.
Next come the config files. You can edit these in a number of ways, but to keep the guide OS-agnostic we will be using the Synology Text Editor package, which can be installed via Package Center.
Open up a new text document and paste one of the two code snippets below into it. Use the one that matches the way your drives were shown in the previous step (if you come across anything different, let me know in the comments!).
Type 1
version: 1
host:
  id: ""
devices:
  - device: /dev/sata1
    type: 'sat'
  - device: /dev/sata2
    type: 'sat'
  - device: /dev/sata3
    type: 'sat'
  - device: /dev/sata4
    type: 'sat'
  - device: /dev/sata5
    type: 'sat'
  - device: /dev/sata6
    type: 'sat'
  - device: /dev/sata7
    type: 'sat'
  - device: /dev/sata8
    type: 'sat'
  - device: /dev/nvme0n1
    type: 'nvme'
  - device: /dev/nvme1n1
    type: 'nvme'
  # - device: /dev/usb1
  #   type: 'sat'
  # - device: /dev/usb2
  #   type: 'sat'
Type 2
version: 1
host:
  id: ""
devices:
  - device: /dev/sda
    type: 'sat'
  - device: /dev/sdb
    type: 'sat'
  - device: /dev/sdc
    type: 'sat'
  - device: /dev/sdd
    type: 'sat'
  - device: /dev/sde
    type: 'sat'
  - device: /dev/sdf
    type: 'sat'
  - device: /dev/sdg
    type: 'sat'
  - device: /dev/sdh
    type: 'sat'
  - device: /dev/nvme0n1
    type: 'nvme'
  - device: /dev/nvme1n1
    type: 'nvme'
  # - device: /dev/usb1
  #   type: 'sat'
  # - device: /dev/usb2
  #   type: 'sat'
You will need to edit the config file in line with the number of drives shown in your output earlier, adding or removing lines accordingly, including the NVMe entries. I have also included a couple of commented-out lines for USB drives if you have them connected.
Next you can save this file as ‘collector.yaml’ in the ‘/docker/scrutiny’ folder.
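If you have a lot of drives, you can generate the ‘devices’ entries rather than typing them out. This is just a sketch: the DEV_GLOB variable and its /dev/sd? default are my own assumption, so change the pattern to /dev/sata? if that is what fdisk showed for your model, and double-check the output before pasting it into collector.yaml.

```shell
# Print a collector.yaml "devices" entry for every node matching the glob.
# DEV_GLOB is an assumption for this sketch; set it to your NAS's pattern.
DEV_GLOB="${DEV_GLOB:-/dev/sd?}"
for d in $DEV_GLOB; do
  [ -e "$d" ] || continue          # skip unmatched glob literals
  printf '  - device: %s\n    type: sat\n' "$d"
done
```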
Notifications Config (optional)
This step is optional and depends on whether you want to set up notifications in case one of your drives has issues.
As of writing there are 14 different notification methods. As you can imagine, I cannot cover every single type in this guide, but this will get the config file in place for you to amend based on your preferences.
Open up a new file in Text Editor again; this time you need to copy and paste the full contents of the example config file located here.
Scroll to the bottom of the file, where you will see a number of config options for notifications. You will need to remove the # from the ‘notify’ and ‘urls’ lines, and then, depending on which type of notification you decide to set up, remove the # from the corresponding line.
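As an illustration, once uncommented the notification section of scrutiny.yaml ends up looking something like the sketch below. The Discord URL shown is only a placeholder in the shoutrrr URL style that Scrutiny uses; substitute the real URL format for your chosen service from the example file.

```yaml
notify:
  urls:
    # placeholder shoutrrr-style URL - replace with your own service's URL
    - "discord://token@channel"
```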
The level of notification you receive (Critical or All Issues) can be set up in the WebUI once Scrutiny is up and running.
Finally, save this file as ‘scrutiny.yaml’ into the /docker/scrutiny folder.
Docker Compose
We will be using Docker Compose in the Projects section of Container Manager to set up the container.
Open up Container Manager and click on Project then on the right-hand side click ‘Create’.
In the next screen we will set up our General Settings. For ‘Project Name’ enter ‘scrutiny’. For ‘Path’, click the button and select the folder we created earlier at ‘/docker/projects/scrutiny’. For ‘Source:’, change the drop-down to ‘Create docker-compose.yml’.
Next we are going to drop in our Docker Compose configuration. Copy all the code in the box below and paste it into line ‘1’, just like the screenshot.
services:
  scrutiny:
    container_name: scrutiny
    image: ghcr.io/analogj/scrutiny:master-omnibus
    cap_add:
      - SYS_RAWIO
      - SYS_ADMIN
    ports:
      - 6090:8080/tcp # webapp
      - 8086:8086/tcp # influxDB admin
    volumes:
      - /run/udev:/run/udev:ro
      - /volume1/docker/scrutiny:/opt/scrutiny/config
      - /volume1/docker/scrutiny/influxdb:/opt/scrutiny/influxdb
    devices:
      - /dev/nvme0n1:/dev/nvme0n1
      - /dev/nvme1n1:/dev/nvme1n1
      - /dev/sata1:/dev/sata1
      - /dev/sata2:/dev/sata2
      - /dev/sata3:/dev/sata3
      - /dev/sata4:/dev/sata4
      - /dev/sata5:/dev/sata5
      - /dev/sata6:/dev/sata6
      - /dev/sata7:/dev/sata7
      - /dev/sata8:/dev/sata8
      # - /dev/usb1:/dev/usb1
      # - /dev/usb2:/dev/usb2
    environment:
      - SCRUTINY_WEB_INFLUXDB_TOKEN=ANYLONGSTRING
      - SCRUTINY_WEB_INFLUXDB_INIT_USERNAME=A-USERNAME
      - SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD=A-PASSWORD
      - COLLECTOR_CRON_SCHEDULE=0 23 * * *
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped
As you can see, the devices section contains all our drives. You will need to amend this in line with the config file you created earlier, making sure the paths on each side of the ‘:’ match, and adding or removing drives (including the NVMe entries) accordingly.
e.g., /dev/sata1:/dev/sata1 or /dev/sda:/dev/sda and so on.
In addition, you will see in the ‘environment’ section three variables that need to be updated as outlined below; these secure the database used by Scrutiny.
| Variable | Value |
|---|---|
| SCRUTINY_WEB_INFLUXDB_TOKEN | Enter a string of characters – you can use almost anything. Treat it like a password, so use a nice long string |
| SCRUTINY_WEB_INFLUXDB_INIT_USERNAME | This can be anything you like |
| SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD | A secure password (minimum of 8 characters) |
| COLLECTOR_CRON_SCHEDULE | 0 23 * * * – this overrides the default midnight cron schedule and runs at 23:00 instead. You can change the schedule to run more than once per day by using https://crontab.cronhub.io/ to get the right expression. For example, 0 * * * * is hourly |
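For reference, here are a few COLLECTOR_CRON_SCHEDULE values in standard five-field cron syntax; the commented alternatives are illustrative examples, not required settings.

```yaml
    environment:
      - COLLECTOR_CRON_SCHEDULE=0 23 * * *     # daily at 23:00 (as in this guide)
      # - COLLECTOR_CRON_SCHEDULE=0 */6 * * *  # every 6 hours, on the hour
      # - COLLECTOR_CRON_SCHEDULE=0 * * * *    # hourly
```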
These 3 values are only required for the first ever setup – you can remove them once Scrutiny is up and running but keep them safe in case you ever need them. Maybe in Vaultwarden!
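If you want a quick way to produce a long random token for the first variable, one option is the openssl command line (a sketch, assuming openssl is available in your SSH session, which it normally is on DSM):

```shell
# Generate a 48-character random hex string, suitable for
# SCRUTINY_WEB_INFLUXDB_TOKEN.
openssl rand -hex 24
```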
Once you have made the edits press ‘Next’
You do not need to enable anything on the ‘Web portal settings’ screen, so click ‘Next’ again.
On the final screen click ‘Done’, which will begin the download of the container image; once downloaded, it will be launched!
You will now see Scrutiny running and should have a green status on the left-hand side.
You should now be able to access the Scrutiny WebUI by going to your NAS IP followed by port 6090
e.g., 192.168.0.30:6090
Sometimes it can take a few minutes before all your drives appear, as Scrutiny needs to obtain their information, so don’t panic if it’s initially empty. You can now adjust settings for the UI and Notifications in the WebUI.
FAQ (Frequently Asked Questions)
I added extra drives to my config, and they don’t appear in the UI even after waiting
You can overcome this by stopping the overall Project and then rebuilding it via Action > Build in the Container Manager UI
My dashboard is empty and shows no drives
If you have waited until after the initial scan and still no drives appear, you can try triggering a manual scan via SSH:
sudo docker exec scrutiny /opt/scrutiny/bin/scrutiny-collector-metrics run
Looking for some help? Join our Discord community
If you are struggling with any steps in the guides or looking to branch out into other containers join our Discord community!
Buy me a beverage!
If you have found my site useful, please consider pinging me a tip as it helps cover the cost of running things, or just lets me stay hydrated. Plus, 10% goes to the devs of the apps I write guides for every year.
No data even after changing the CRON. Any suggestions?
Hey Pesh
Can you upload a few things for me, so I can see what is up…
The logs from the container, the collector.yaml and the docker compose as well. Drop them in separate pastes if you need to and put the link in your reply:
https://paste.drfrankenstein.co.uk
I found my issue. I wanted to put it into host mode. This seems to mess up the DB connection.
OK – Avoid host mode unless it is specifically required for stuff.
hello,
I followed every step of the guide, green light, service starts, I can connect to my DS423+ but it shows no drives…
here is my yaml:
https://paste.drfrankenstein.co.uk/?d474a6fff854770c#DEgPEVy1MEGZVSiw7yLuB8trNBYuSJDSC21pmAr4fsWr
thanks in advance for help!
Hey Andrea
Could I also see your config file please, drop its contents into my paste site and put the URL it gives you into your reply
Thanks
Thanks a lot for the updated guide for DSM7.2, very helpful.
I face the issue that only 1 of 4 disks is shown on my DS918+ (/dev/sdd is shown).
Also in InfluxDB itself, only this disk is shown.
How long does it take to initialise? Can this take multiple hours?
I already had a look at the discussion with Richard; unfortunately this also did not solve my issue.
https://paste.drfrankenstein.co.uk/?3c983436527f868f#9JUCPdHpi6iyXFskwZkMCnGQiu9vUxC3Nr8dNtU7p6s9
So I recently learned the scheduled scan takes place at midnight – leave it running and let me know if they all appear over the next 24 hours. It is usually really fast in my experience, but for some it seems to take longer.
Also, after multiple days there is only one disk shown.
I am in the process of exchanging all HDs.
Today a new disk was detected, which is good, but the other 3 are still missing.
I ran fdisk -l and smartctl --scan again. Interestingly, in fdisk the order is a bit of a mess at the moment, while in smartctl --scan it is ordered.
I think I will reboot the NAS when all the new disks are in and the RAID has been rebuilt properly.
fdisk -l
fdisk: cannot open /dev/ram0: Permission denied
fdisk: cannot open /dev/ram1: Permission denied
fdisk: cannot open /dev/ram2: Permission denied
fdisk: cannot open /dev/ram3: Permission denied
fdisk: cannot open /dev/ram4: Permission denied
fdisk: cannot open /dev/ram5: Permission denied
fdisk: cannot open /dev/ram6: Permission denied
fdisk: cannot open /dev/ram7: Permission denied
fdisk: cannot open /dev/ram8: Permission denied
fdisk: cannot open /dev/ram9: Permission denied
fdisk: cannot open /dev/ram10: Permission denied
fdisk: cannot open /dev/ram11: Permission denied
fdisk: cannot open /dev/ram12: Permission denied
fdisk: cannot open /dev/ram13: Permission denied
fdisk: cannot open /dev/ram14: Permission denied
fdisk: cannot open /dev/ram15: Permission denied
fdisk: cannot open /dev/sdc: Permission denied
fdisk: cannot open /dev/sdd: Permission denied
fdisk: cannot open /dev/md0: Permission denied
fdisk: cannot open /dev/zram0: Permission denied
fdisk: cannot open /dev/zram1: Permission denied
fdisk: cannot open /dev/zram2: Permission denied
fdisk: cannot open /dev/zram3: Permission denied
fdisk: cannot open /dev/md1: Permission denied
fdisk: cannot open /dev/synoboot: Permission denied
fdisk: cannot open /dev/md2: Permission denied
fdisk: cannot open /dev/mapper/vg1-syno_vg_reserved_area: Permission denied
fdisk: cannot open /dev/mapper/vg1-volume_1: Permission denied
fdisk: cannot open /dev/mapper/cachedev_0: Permission denied
fdisk: cannot open /dev/sda: Permission denied
fdisk: cannot open /dev/sdb: Permission denied
smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d scsi # /dev/sdb, SCSI device
/dev/sdc -d scsi # /dev/sdc, SCSI device
/dev/sdd -d scsi # /dev/sdd, SCSI device
I cannot imagine that the order matters. Or does it?
I will check the logs of the container also if all disks are replaced.
Other than that, I found out that I have InfluxDB on a different port, but that shouldn’t be the issue, as I get partially new data.
- 8087:8086/tcp # influxDB admin
Hey, the order shouldn’t matter – wait for your array to finish, do a reboot, and please report back. You may need to adjust/add a line in the config file for InfluxDB; however, the logs will normally tell you if it can’t connect.
Thanks a lot for your help!
After reboot and 24h of operations still only one disk shown.
I had a look in the logs. This is suspicious to me (logs below, sorry for the bad formatting, no idea if I can format it somehow):
1. smartctl --xall --json --device sat /dev/sdb on the Synology (via SSH) shows an error:
smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, http://www.smartmontools.org
=======> UNRECOGNIZED OPTION: json
time="2024-07-22T00:00:26Z" level=info msg="Publishing smartctl results for 0x5000c500c325d963\n" type=metrics
time="2024-07-22T00:00:25Z" level=info msg="Executing command: smartctl --xall --json --device sat /dev/sdb" type=metrics
time="2024-07-22T00:00:25Z" level=info msg="Collecting smartctl results for sdb\n" type=metrics
time="2024-07-22T00:00:25Z" level=error msg="An error occurred while publishing SMART data for device (0x5000c500c325d963): Post \"http://localhost:8080/api/device/0x5000c500c325d963/smart\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" type=metrics
time="2024-07-22T00:00:21Z" level=info msg="127.0.0.1 - 57131787e2f9 [22/Jul/2024:00:00:21 +0000] \"POST /api/device/0x5000c500c325d963/smart\" 200 16 \"\" \"Go-http-client/1.1\" (17909ms)" clientIP=127.0.0.1 hostname=57131787e2f9 latency=17909 method=POST path=/api/device/0x5000c500c325d963/smart referer= respLength=16 statusCode=200 type=web userAgent=Go-http-client/1.1
I am doing some testing this end to see if something has changed in the setup at all to cause the issue as someone on Reddit had the similar issue.
Can you try adding this to your compose to force the cron to run earlier
environment:
- COLLECTOR_CRON_SCHEDULE=0 22 * * *
Amend the 22 to the hour you want to trigger the scan and let me know what happens
Not sure what the problem with the setup is, but the file ‘collector.yaml’ in the config directory was not correctly generated. I stopped the container, corrected the contents of collector.yaml, and then my devices were correctly displayed upon the first startup of the container.
Oh OK – Did the collector change after you initially made amendments to it?
how to add password for web interface
It is not supported out of the box
https://github.com/AnalogJ/scrutiny/issues/34
Hi –
I’m receiving a “bind mount failed: ‘/volume1/docker/projects/scrutiny/influxdb’ does not exist” error. Any thoughts?
Thanks,
Have you created that folder? Also check for typos and capitalisation.
Ah thanks – I didn’t create the “influxdb” folder under “scrutiny”.