Five years of managing my own cloud provider: lessons learned
It was around the end of 2017 when a chat with my brother Vitor Nazário about the risks of putting sensitive data in centralized clouds made us follow a new direction: let’s host our own cloud. Before going further into the text, some general warnings (from 5 years in the future):
- hosting your own cloud may cause data loss, unless very careful measures are taken
- you will spend a lot of time doing manual Linux configuration, manual system maintenance, and also reading lots of texts/tutorials on the Internet
- you will likely spend more money (on equipment/tools) than just paying some proprietary cloud provider
- do not push friends and family to join your cloud (this is VERY important!)… encourage them to use whatever they feel most comfortable with (you need to be really motivated about privacy to host a cloud)
- uptime and bandwidth will certainly be lower than with proprietary clouds, and there is also a risk of physical destruction of the cloud (by fire, water, etc.)
So, time to proceed!!
Choosing good hardware and software
We live in an amazing moment in time, where open source technology allows us to build incredible things by ourselves, using many tools that are freely available. So, we looked into the most famous software for hosting private clouds at the time, as well as the most suitable hardware.
We wanted something simple to manage (if possible), simple to maintain, with RAID capability (at least RAID1), and with very low energy consumption. We chose two NAS units from Western Digital, the WD My Cloud EX2 Ultra, which have a simple CPU (an ARMv7 processor) and very limited memory (1 GiB). It natively supports two HDs in RAID1 (mirror), as well as USB 3.0 for external drives. Note that RAID1 is really important: you don’t want to stop your server when a single HD fails… and it will fail! Also, note that RAID1 is not itself a backup… you need to plug in at least one more hard drive (via USB 3.0, for instance) and frequently make system backups onto it (this is the real backup!).
At first, we wanted to keep things as simple as possible, so we tried to use the native software, called My Cloud. The positive side is that it manages most of the hardware configuration by itself, such as the RAID1 (we set up two 3 TB HDs as a mirror, so the total space available was 3 TB). Unfortunately, we couldn’t make it work efficiently for the many client platforms we needed, such as Android, Ubuntu Linux, Windows, etc., especially for the Desktop Sync features. So, we installed Docker on the NAS (officially supported by WD) and manually configured a Docker container with an ownCloud server, which is open source and well supported by the community.
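As a rough illustration of the container setup (not the exact configuration from our tutorial), something like the following Docker Compose sketch captures the idea; the image tag, port and volume path are assumptions to adapt to your device:

```yaml
# Hypothetical docker-compose.yml sketch for an ownCloud server on the NAS.
version: "2"
services:
  owncloud:
    image: owncloud/server            # community server image (assumed tag)
    ports:
      - "8080:8080"                   # expose the web interface
    volumes:
      - /mnt/HD/HD_a2/owncloud:/mnt/data   # keep data on the RAID1 volume (assumed path)
    restart: unless-stopped           # come back automatically after power loss
```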
Note from 5 years later: during this time we changed the technology a little bit and started using Nextcloud instead of ownCloud for some clients (the protocol is the same anyway), although our server was always kept on ownCloud in order to preserve uptime for our “clients” (family and friends), and mostly due to our laziness to change something that was already working.
Configuring an ownCloud server instance on ARM
As long as Docker is available on the system, even on the ARM architecture, it is “very easy” to set up the server. Obviously, nothing is ever really easy, but it is doable, especially for those who already know Docker and are also familiar with the Linux terminal, MySQL databases, mount points, port forwarding and cron jobs.
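To give a flavor of the cron side of this, a crontab sketch like the one below is the kind of glue involved (the container name and script path are hypothetical):

```shell
# Hypothetical crontab fragment (edit with: crontab -e)
# make sure the ownCloud container is running a couple of minutes after reboot
@reboot sleep 120 && docker start owncloud
# weekly backup to the external USB disk (script name is an assumption)
0 3 * * 0 /root/backup-to-usb.sh
```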
The detailed instructions can be found here: https://github.com/igormcoelho/wd-ultra-mycloud-owncloud
Warning: this may void your warranty (although I’m not certain of that, and in the tutorial I tried not to mess with anything related to the core My Cloud system).
One of the most challenging points of hosting your own cloud server is that you will need some public domain (or some subdomain), such as personalcloudxxxx.no-ip.com. Unless you have a public, fixed IP, you will have to handle some sort of dynamic DNS… in my case, instead of using No-IP, I registered a domain with Namecheap, which has amazing support for multiple dynamic DNS subdomains. This allowed us to pay for a single domain and register multiple cloud entries (towards a federation of decentralized and private clouds!).
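Namecheap’s dynamic DNS works over a simple HTTP update URL, so a cron job can keep the subdomain pointing at your home IP. A sketch, where the host, domain and password are placeholders (the real password comes from the Namecheap dashboard, and I’m assuming the standard update endpoint):

```shell
#!/bin/sh
# Sketch of a Namecheap dynamic-DNS update for a cron job.
# HOST/DOMAIN/DDNS_PASSWORD are placeholders, not real credentials.
HOST="cloud"
DOMAIN="example.com"
DDNS_PASSWORD="changeme"
URL="https://dynamicdns.park-your-domain.com/update?host=${HOST}&domain=${DOMAIN}&password=${DDNS_PASSWORD}"
echo "$URL"   # the real cron entry would run: curl -fsS "$URL"
```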
At this point, we should emphasize that the idea was to expand into multiple clouds, a real federation of nodes collaborating and securing each other’s information… but in practice, we only managed to keep one online over all these years. A real concern is that some Brazilian cable internet providers do not give valid public IPs to their clients, and do not manage port forwarding in those cases either. Effectively, this breaks any possibility of hosting a web service in our own home! Luckily, my provider in Rio is quite good and efficient: port forwarding / firewall configuration was done quickly, as was the dynamic DNS with Namecheap.
An important step is to get proper HTTPS certificates, which can be done with Let’s Encrypt… but self-signed certificates also work for such private scenarios.
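For the self-signed route, a single openssl command is enough; the domain below is a placeholder, and for browser-trusted certificates you would use Let’s Encrypt (e.g. via certbot) instead:

```shell
# Sketch: generate a self-signed certificate and key for a private cloud.
# The CN (domain) is a placeholder; point it at your dynamic DNS name.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=mycloud.example.com" \
  -keyout /tmp/cloud.key -out /tmp/cloud.crt
```

Clients will warn about the unknown issuer on first use, which is acceptable for a family cloud but not for public services.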
Handling passwords and sensitive data
One strong point of having a secure and private cloud is the peace of mind of being able to store any kind of information, including password databases. Nowadays, it is strongly recommended never to repeat passwords across websites, due to the several leaks that happen every year and can expose credentials for important personal services. There are many nice programs for handling password databases, which are always encrypted, so users can put them on proprietary clouds as well.
In my case, I used KeePassXC for a long time, a nice open-source program that has no cloud of its own for storing password databases, which is fundamental for reusing the same database on both personal computers and mobile devices. So, a private cloud was very welcome in this sense, not just for me, but for friends as well (friends who shared the desire not to put sensitive data on famous cloud providers).
Funny thing: after some years using this KeePassXC + private cloud combination, I finally decided to migrate my passwords to a solution that includes the cloud… and the reason that really got me to change is that we sometimes need Emergency Access modes and easy recovery features, which were very hard to implement manually in a private cloud. I won’t recommend any specific software in this regard, but there are several solutions available, some fully or partially open source, such as Bitwarden, LastPass, and others (thanks to my brother Mateus Nazário for the nice advice).
In any case, I was happy that the private cloud could help friends solve these kinds of problems, as when “friends of friends” desperately needed hundreds of GB in the cloud, for a day or two, just to allow some photographer to upload thousands of important wedding photos (and proprietary solutions failed to provide that efficiently at that moment).
Efficiency of the private cloud
Regarding efficiency, a real friend is Redis caching on the ownCloud server… without it, the ownCloud server is very slow, virtually unusable. In practice, the server is much slower than proprietary clouds (the WD EX2 Ultra is quite limited indeed), but it really works, especially for desktop sync. Even with virtual files in desktop sync, it gives users a nice experience, but browsing the web interface directly on the ownCloud server feels really slow (so, no online Image Gallery or Collabora Docs in “realtime”).
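Enabling Redis on ownCloud happens in config.php; a sketch of the relevant fragment, assuming a Redis instance reachable at localhost:6379 (e.g. in another container on the NAS):

```php
<?php
// Fragment of ownCloud's config.php enabling Redis caching (sketch).
// Host/port are assumptions; point them at your Redis instance.
$CONFIG = array (
  // ... existing settings ...
  'memcache.local'   => '\OC\Memcache\Redis',   // local object cache
  'memcache.locking' => '\OC\Memcache\Redis',   // file-locking cache
  'redis' => array (
    'host' => 'localhost',
    'port' => 6379,
  ),
);
```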
For monitoring, a very helpful project has been Uptime Robot (uptimerobot.com), which worked nicely for many years, reporting every power loss by email, as well as the automatic system recovery that comes after.
As nothing lasts forever, some day every hard drive will crash. Recently, when one of my two HDs crashed, the WD EX2 Ultra turned one blue light red for the first time, so I immediately proceeded to buy a replacement HD. Unfortunately, this led to a very poor experience, as the RAID1 failed to activate the new hard drive and left it as a “spare”. The reason is that the other “perfect” drive was not really perfect anymore, with a few bad blocks and read errors due to years of daily operation. This became a real issue, and the external backup proved useful (yes, I have 3 HDs: 2 in RAID1 plus an external backup disk).
Typically, power losses and other problems do not lead to real issues, as the system recovers automatically without any problem (in very rare situations, I need to manually turn the power off and on so that it comes back online).
No data in the RAID1 was lost in the “HD crash event”, even with a single HD left, but I felt it was already time to seriously consider the possibility of some data loss in the near future for the RAID1 (although there are backups both on the server and on the desktop sync clients). I also realized that handling hundreds of GB makes it very hard to manage backups (even incremental ones, especially when BTRFS snapshots didn’t work as expected). So, my advice at this point is: do not manage terabytes of data with such a CPU-limited device. If you want to use a private cloud, try to keep data usage as low as possible, as this makes all processes much faster and lowers the risks (especially if password databases are not stored there).
Final words (for now)
I thank all the friends who listened to us talking for hours and hours about this over all these years… and I especially thank the family and friends who tested and actually used the system, even though it was never in a perfect state. In particular, huge thanks to my wife Cris, who trusted me and risked her personal data on such an awkward system, and who also tolerated all the noise and heat from the equipment during these years. Family and friends are now free to use their favorite cloud solutions, and I support them in what they believe is best. Personally, I will continue this decentralization journey until something really better is invented.