09 Jan

Xiaomi Mi 5s – Overcoming a fake ROM

 

I bought a fancy pants new Xiaomi phone. My Samsung S5 was getting a bit long in the tooth. I wanted another flagship, but Samsung’s ROMs have been bloating up lately, and while I’m a custom ROM fanboy, I was getting sick of CyanogenMod and the occasional app refusing to work with an unlocked bootloader, custom ROM, etc.

So I got a Xiaomi Mi 5s for about $400 NZD from Banggood. The phone and case arrived 7 days after I ordered, so far so good. What I didn’t know in advance was that Banggood (and other vendors) sneakily put their own ROM on the phone, which is rambo jambo’d with malware and adware and can’t receive OTA (over the air) software updates – in short, it’s a shit sandwich.

The MIUI forums are a great place to get help, but if you have a Banggood Mi 5s phone with an 8.x.x.x fake ROM – you can probably follow these steps.

  1. First up, confirm you are on a fake ROM. This page is a good way to do so. I had 8.1.1.18(OxxxxCNX) and it looked like bad news. I couldn’t do a software update or run Pokemon Go, so I decided I was affected and moved on.
  2. Unlock your bootloader. What does this mean? Well, it’s a setting on an Android phone that lets you install custom software. In our case, we’re actually trying to do the opposite, by rolling this dodgy fake ROM back to the official MIUI release. Nevertheless, we need to unlock our BL. Xiaomi makes us do this by logging in to their website and requesting an unlock on our Mi account. Simply create a Mi account and request an unlock. It might take a few days – mine took approx. 72 hours to get the verification SMS.
  3. Once we have confirmation that the Mi account is enabled for unlocking, we need to download some software (unfortunately you need a Windows PC for this – although I did succeed using a VMWare Fusion VM on OSX). This software is the unlocking app – we need to log in to the Mi account on the phone and inside this programme simultaneously, otherwise it will stall at 50% during the unlocking process. So – click here to get the latest Mi Flash.
  4. Open the Mi Flash programme and turn your phone on in Recovery mode (hold the Volume Up and Power buttons at the same time, from when the phone is off). Log in on the application and select Unlock. This should work smoothly.
  5. Now that we have unlocked the bootloader, we should be able to use the Update application on the phone itself (usually in a folder called Tools). Your best bet is to go to the official source and grab a Global ROM (assuming you don’t want a Chinese one) – over here. Once you have the ROM, put it on your phone’s internal storage in a folder called downloaded_rom (create one if it doesn’t exist), open the aforementioned Update app and select the .zip file you just uploaded. It will take about 45 mins to work.
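
If you want to double-check that the bootloader really is unlocked at any point, you can do it from the PC with fastboot (part of the Android platform-tools), with the phone booted into fastboot mode. This is only a sanity check – the output varies by device and the oem command isn’t supported on absolutely everything, but it works on plenty of phones:

fastboot devices                 # the phone should show up here
fastboot oem device-info         # look for "Device unlocked: true"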

Mean buzz – you have an official Xiaomi phone/ROM, not some dodgy crippleware. Instead of asking questions here, I recommend heading to the MIUI forums, already linked. Try and only use official software from Xiaomi/MIUI. Not everyone out there is friendly.

 

07 Nov

Source Specific Multicast with iperf

oooh iperf ssm

As part of my lab at work, I need to have lots of traffic flying around all over the place. Unicast IPv4 and IPv6, of course – but also traffic in L3VPNs and multicast traffic. Multicast is a big part of the day-to-day traffic load in my production network, so it’s important to be there in the lab too.

I’ve used a variety of tools to generate multicast traffic in the past, more often than not the excellent OMPing. This time, however, I wanted to really chuck some volume around, to make my stats look nice and to really show up on NFSen. iperf is the natural choice for generating lots of traffic in a hurry – I’m using iperf3 to throw about 20Gbit/s around the lab core – but for some reason the developers have removed multicast functionality from the new and improved iperf3.

ASM (or any-source multicast) is (more) traditional multicast. It relies on interested parties joining a group (a multicast address) where any sender can be considered the source (*,G). It works well when your environment is well set up, with a rendezvous point and protocols such as PIM-SM (and MSDP when you wanna go multi-domain). iperf handles this pretty easily: simply set up a server to ‘listen’ on one end, and a client to send on the other. This can be achieved with two hosts, connected via a router which acts as the PIM-SM rendezvous point and has IGMPv2 on the interfaces, so the listener can tell its router that it’s keen on having a listen.

On the listener:
iperf -s -B 233.255.1.3 -u -f m -i 1

On the sender:
iperf -c 233.255.1.3 -u -b 100m -f m -i 1 -t -1 -T 32

On the listener, we’re telling iperf to be a server (-s), listening to an ASM multicast group (-B), using UDP (-u), formatting the output in megabit/s (-f) and telling us what’s up every 1 second (-i).

On the sender, we’re telling iperf to be a client (-c), using UDP, sending a 100Mbit/s stream (-b), sending forever (-t) and setting the multicast TTL to 32 (-T) – overkill, but hey.

This is a quick and easy way to check a basic ASM setup. I use it to confirm my multi-domain lab setup is working with multicast as I would expect and to generate lots of traffic to amuse myself.


Getting source specific

Source specific multicast is a bit cooler than ASM, because it should work for more people, more easily. If you have a single source (like a TV encoder, or some kind of unique data source) that you want plenty of receivers to see, and you have a nice way of telling them about your source address (like a higher-layer application – or in our case, manual config) – then SSM is the protocol for you.

To make SSM work, you really only need IGMPv3 in your network. Most *nix OSes and even Windows support IGMPv3 – usually by default. To check on your *nix host, you can run:
cat /proc/sys/net/ipv4/conf/eth0/force_igmp_version

If that returns a 0, and you’re on a modern-ish OS, you’re good to go. You can force your kernel to use IGMPv3 per interface, with:
echo "3" > /proc/sys/net/ipv4/conf/eth0/force_igmp_version
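
That echo won’t survive a reboot. If you want the setting to stick, the usual move is a sysctl drop-in – the file name below is just my choice, and adjust the interface name to suit:

echo "net.ipv4.conf.eth0.force_igmp_version = 3" | sudo tee /etc/sysctl.d/99-igmpv3.conf
sudo sysctl -p /etc/sysctl.d/99-igmpv3.conf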

While most OSes support IGMPv3 out of the box, plenty of network administrators in your friendly local LAN have probably forgotten to turn it on, or have left it at IGMPv2 and called it a day. No good.

Assuming you’re all good with IGMPv3, you then need to have an application which can listen not only for a particular multicast group address, but also a specific source. iperf, by default, doesn’t support this. Lucky for us, then, noushi has written an extension to iperf that allows for a couple of new flags, to set a specific source and interface. You can grab it here.

If you already have iperf installed, remove it. In Debian/Ubuntu, that would be:
sudo apt-get autoremove iperf -y

Unzip the download from GitHub, cd into the extracted directory, then we compile:
./configure
make
sudo make install

Now we can run a client to send multicast as before (but this time, sending to the 232.0.0.0/8 SSM range):
iperf -c 232.1.0.55 -u -b 10m -f m -i 1 -t -1 -T 32 -p 5002

I’m running it on a different port (-p) from default, because I want SSM and ASM at the same time.

For the listener, we need to tell our newly improved iperf to listen for a particular source (the regular old IP address of our sender):
iperf -s -B 232.1.0.55 -u -f m -i 1 -p 5002 -O 192.168.66.22 -X eth0.101

The two new flags are there, along with our SSM multicast group (and new port). The -O flag sets the SSM source and -X tells iperf to use a particular interface. I’m not sure -X is new, but I’ve never used it before – so let’s say it is.

If it’s working, we’ll see a 10Mbit/s SSM stream turn up on the listener.
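
If the stream doesn’t turn up, it’s worth confirming the listener has actually joined the group (and the source) before blaming the network. A couple of quick checks on the listener host – the interface name is whatever you bound to:

ip maddr show dev eth0.101       # the 232.1.0.55 group should appear in the list
cat /proc/net/igmp               # kernel's view of group memberships (groups in hex)
cat /proc/net/mcfilter           # per-source filters for SSM joins (also in hex)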

27 Oct

nfsen + debian + apache = d’oh

I was re-doing one of my lab monitoring tools, a VM that hosted too many sparse and poorly maintained pieces of software. I’m now re-homing each bit onto its own VM (partially for sanity), and ended up re-installing the excellent NFSen (a netflow monitoring tool/frontend for nfdump).

The software includes a directory named ‘icons’ in the web root, which doesn’t seem insane to me. What is insane, however, is Apache’s decision (by default!) to include an alias for a folder named ‘icons’ in the root. That meant that, without my knowing it, the NFSen icons folder was being redirected to /usr/share/apache2/…/ whatever. That caused a headache.

To find this out, I ran:
cd /etc/apache2
grep -iR /usr/share *

This told me about the dang alias file, /etc/apache2/mods-available/alias.conf

I went into that file, commented out this dumb default, restarted Apache and now it’s away laughing.
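
For anyone hitting the same thing, the fix is tiny – the exact Alias line may differ slightly between Debian/Apache versions, but it looks something like this:

sudo nano /etc/apache2/mods-available/alias.conf
# comment out the default icons alias, which looks roughly like:
#     Alias /icons/ "/usr/share/apache2/icons/"
sudo service apache2 restart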

14 Oct

QoS for your Linux Plex box

FireQos in action

When Jim Salter posted about FireQos the other day, it made me take note. FireQos is a unix’y traffic shaping/QoS tool ‘for humans’ (part of the FireHOL suite). In my day job, QoS is a complex and multi-faceted thing, requiring tonnes of design, thought and understanding to implement correctly (not to mention hardware). It has dramatic effects on network traffic when set up correctly, but that usually means end-to-end config across a domain, so marking at one end of the network translates to actions all the way through. That’s a bit much for home.

I was interested, as I had a problem. Behind my TV I have an Intel NUC, with a little i3 processor and 802.11n wifi. I use it to torrent things, run a Plex server and be a multi-purpose Linux machine for my own needs the rest of the time. (OwnCloud is still running on the Raspberry Pi 3, mind you). When I was pulling down a delicious new Debian image at 12MB/s and trying to watch something on Plex (via the PS4), things got a bit choppy. Trying to VNC into the box from my laptop to throttle the torrent was always annoying – it could take minutes for the screen to refresh if a very hearty download was going on. Like most nerds, the slightest delay caused by my own setup was slowly tearing me apart.

This is where FireQos comes in. With a very simple install and a couple of minutes of configuration out of the way, the performance improved dramatically. All I did was prioritise the traffic for Plex, SSH, VNC and browsing over torrents/anything else – and like magic, everything works smoothly together, with no throttling on the torrent client.

Remember before where I said QoS really needs to be end-to-end in the network to make a difference? In this case, not true. By simply tweaking the Linux handling of packets, things have gotten much better, with the rest of the network unaware anything is happening. Obviously, this would improve if I had a router that was also participating in the fun, but I don’t… yet. At the moment, if another device tries to use the network when a full torrent storm is going on, it’s toast.

Anyhow, check out the FireQos tutorial here, and give it a crack yourself. There’s basically no risk, go nuts.

Here’s my fireqos.conf file, so you can copypasta it if you like.

DEVICE=wlan0
INPUT_SPEED=120000kbit
OUTPUT_SPEED=120000kbit

interface $DEVICE world-in input rate $INPUT_SPEED
   class interactive commit 20%
      match udp port 53
      match tcp port 22
      match icmp
      match tcp sports 5222,5228
      match tcp sports 5223

   class plex commit 50%
      match udp port 1900
      match tcp port 3005
      match udp port 5353
      match tcp port 8324
      match udp port 32410
      match udp port 32412
      match udp port 32413
      match udp port 32414
      match tcp port 32469

   class vnc commit 5000kbit
      match tcp port 5901

   class surfing commit 20%
      match tcp sports 0:1023

   class synacks
      match tcp syn
      match tcp ack

   class default

   class torrents
      match dports 6881:6999
      match dport 51414 prio 1

interface $DEVICE world-out output rate $OUTPUT_SPEED
   class interactive commit 20%
      match udp port 53             
      match tcp port 22             
      match icmp                    
      match tcp dports 5222,5228    
      match tcp dports 5223         

    class plex commit 50%
      match udp port 1900
      match tcp port 3005
      match udp port 5353
      match tcp port 8324
      match udp port 32410
      match udp port 32412
      match udp port 32413
      match udp port 32414
      match tcp port 32469

    class vnc commit 5000kbit
      match tcp port 5901

   class surfing commit 20%
      match tcp dports 0:1023

   class synacks                       
      match tcp syn                    
      match tcp ack                   

   class default

   class torrents
      match dports 6881:6999        
      match dport 51414 prio 1
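
To actually apply it, drop that config into the spot FireQos expects (usually /etc/firehol/fireqos.conf – check where your package puts it) and kick it off. I believe the status command takes the interface name you defined in the config, and it gives a lovely live view of traffic per class – which is half the fun:

sudo fireqos start                 # parse the config and apply the qdiscs/classes
sudo fireqos status world-in       # live per-class counters for the input side
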
24 Aug

Best Windows TFTP Client

This is mostly a note to self.

3CDaemon was always my Windows TFTP server of choice, but finding a valid .exe that works on Windows 7 64bit and isn’t riddled with viruses is a problem when you’re in a hurry. Ph. Jounin has written the lovely Tftpd64. You can get it from the main site here (and in 32bit flavours if you like). I’ll upload the zip here too, in case it ever goes offline.

I don’t mind if you never want to trust a download from a random blog, but for my own stress levels, now I have it in the cloud 🙂

tftpd64.452

26 Jul

Upgrading Junos issue – not enough space

Quick note, mostly for my own reference down the track.

I have in the lab an MPC3E-NG FQ and a MIC3-100G-DWDM card. To those of you not Juniper ensconced, that’s a chassis-wide slot and a 100Gbit/s DWDM (tunable optic) card. Wowee, I hear you say. Anyway, the 100G DWDM card requires a fairly spicy new version of Junos, one with a new underlying FreeBSD core at the heart. My lab was running on Junos 14.1R5.5, an already pretty recent version – but for 100G across my lab I need to use the DWDM card, in conjunction with some Infinera DWDM kit for good measure.

Normally, a Junos upgrade is fairly painless. In this case however, I was getting errors that I couldn’t understand. Here is how I pushed past them.

When going from 14.x to 15.1F6, Juniper recommend not running validation of the software release against your config (hence the no-validate flag below).

james@mxZ.lab.re1> request system software add /var/tmp/junos-install-mx-x86-64-15.1F6.9.tgz no-validate re1

When I ran this on one RE (RE0) it went through just fine. On RE1 however, I got this:

Installing package ‘/var/tmp/junos-install-mx-x86-64-15.1F6.9.tgz’ …
Verified manifest signed by PackageProductionEc_2016
Verified manifest signed by PackageProductionRSA_2016
Verified manifest signed by PackageProductionRSA_2016
Verified contents.iso
Verified issu-indb.tgz
Verified junos-x86-64.tgz
Verified kernel
Verified metatags
Verified package.xml
Verified pkgtools.tgz
camcontrol: not found
camcontrol: not found
camcontrol: not found
Verified manifest signed by PackageProductionEc_2016
Saving the config files …
NOTICE: uncommitted changes have been saved in /var/db/config/juniper.conf.pre-install
tar: contents/jpfe-wrlinux.tgz: Wrote only 0 of 10240 bytes
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: contents/jplatform-ex92xx.tgz: Wrote only 0 of 10240 bytes

(truncated)

tar: Skipping to next header
tar: Error exit delayed from previous errors
ERROR: junos-install fails post-install

This looks a bit like something has gone wrong, but it’s not immediately obvious what it is. I am using a .tgz of Junos 15 that is sitting on my RE1’s /var partition, which has 2.1 GB of hard-drive space free (show system storage)… Cue a few hours of headscratching.

Turns out, my assumption about how much space is actually required when upgrading from Junos 14 to 15 was well under the mark. I started with about 2GB free, but when I was finally successful I had 11GB free. Once the install was complete, I was down to 7.7GB, which means the installation process uses up 3.3GB all by itself. I guess that’s not crazy, as the tarball is 1.9GB to begin with, but the error output didn’t make it clear enough for me, a stupid man.

Here is how I overcame the obvious and got Junos 15.1F6 installed 🙂

1: Jump into shell

james@mxZ.lab.re1> start shell

2: Become root

james@mxZ.lab.re1> su
<enter password>

3: Find the big files – we’re looking for things in /var (note that without a suffix, find counts in 512-byte blocks, so +500000 is roughly 250MB and up)

root@mxZ% find / -size +500000

4: Delete any of the old .tgz files from previous releases. If you want to keep them, SCP them off first.

root@mxZ% rm /var/sw/pkg/jinstall64-12.1R2.9-domestic-signed.tgz

etc etc

5: Check you now have > 3.5 GB free. Or go all out like me and have 11GB free, whatever.

6: Upgrade Junos and get back to work!
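
Bonus tip: if you’d rather not go hunting for big files by hand, Junos also has a built-in cleanup command that clears out old log files and software packages. Run the dry-run flavour first to see what it intends to delete:

james@mxZ.lab.re1> request system storage cleanup dry-run
james@mxZ.lab.re1> request system storage cleanup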

19 Jul

WordPress on a VPS.. Ugh.

As part of my dumb journey to self-host things (well, on a VPS I pay for), I fired up Apache and chucked a virtual host on it. The plan will be to move all my hosted sites to this server, but for now I’m starting with a fresh WordPress install for a family member’s B&B website.

In the mix we have:

  • Apache2
  • Ubuntu 14.04 LTS 64bit
  • php5
  • MySQL
  • phpmyadmin
  • WordPress latest
  • Sweat coming out of my face

So the basic LAMP install was done without a hitch. Where WordPress started to suck was permissions. By default, somehow, I installed WordPress with the permissions set really wide open (777 on directories). I used some commands from the WordPress Codex (run inside my /var/www/sitename/public_html folder):

james@chappie:~$ sudo find . -type f -exec chmod 664 {} +
james@chappie:~$ sudo find . -type d -exec chmod 775 {} +
james@chappie:~$ chmod 400 wp-config.php

So, there’s that. Then I found a user couldn’t upload a theme or plugin, because for some reason (despite www-data, the Ubuntu Apache user, having group rights to the files) WordPress couldn’t write to wp-content/, the folder these things go in by default.

If I ran ‘su www-data touch check’ inside wp-content, the file “check” was created, so it wasn’t a permissions issue as far as I could tell. Weird. I ended up fixing it by explicitly telling WordPress to allow ‘direct’ uploading by adding:

define('FS_METHOD', 'direct');

to the wp-config.php file. All of a sudden, it works.
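
In hindsight, the ‘weird’ part makes sense: WordPress only picks the ‘direct’ filesystem method when the user PHP runs as matches the owner of the WordPress files – group write access isn’t enough – otherwise it quietly falls back to asking for FTP credentials. A quick throwaway way to see who PHP actually runs as (the path is my example vhost, swap in your own, and delete the file afterwards):

echo '<?php echo exec("whoami");' | sudo tee /var/www/sitename/public_html/whoami.php
curl http://localhost/whoami.php    # or hit /whoami.php on the site in a browser
sudo rm /var/www/sitename/public_html/whoami.php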

I had read elsewhere (a three-year-old, mysteriously worded comment on WordPress’ own community site) that changing PHP’s config to use FastCGI/php-fpm was the solution. I am still (and probably always will be) a noob with these web technologies, so I wasn’t really 100% sure what I was doing. I also failed – luckily, setting ‘FS_METHOD’ to direct was what I wanted.

Update:

I found that changing the ‘server api’ of PHP5 to FPM/FastCGI (and removing the aforementioned FS_METHOD) also works. I followed the steps listed here (JDawg’s answer): http://askubuntu.com/a/527227/521633

19 Jul

Running your own VPS kind of sucks

I pay a good chunk of change to Dreamhost every year – I have done since I was old enough to have a credit card. It’s handy. They are a pretty chill hosting provider, by and large. I host about 8 or 9 WordPress sites. Some are mine, some for friends. As I get better at unix’y stuff, I’m becoming more of a cheapskate and am thinking that $140NZ or whatever Dreamhost costs per year is a bit steep for a shared hosting platform. I have trialled their more dedicated VPS offering and while it kills performance-wise, it’s too expensive for someone hosting sites on behalf of others for free…

A while back I got a good deal on Zappie Host, based in New Zealand, where most of my ‘clients’ are also based. It’s a dedicated VPS, 2 CPU cores and a gig of RAM for about $7NZ a month. I am also on a 50% off deal for the first year, so it’s a good deal. I’m paying for it in parallel with DH so I can get my shit together and migrate. Part of the motivation for sorting it all out before my next DH payment is due is financial, as I’m paying for 2 hosting solutions and not getting much benefit from the combination.

Some challenges:

  • Picking a web server (Apache or NGINX?!)
  • Picking a database (MariaDB or MySQL!?)
  • To use a cPanel or not?
  • Security
  • Mail (Jesus Christ, email is complex!)
  • Virtualisation/separation of different client sites
  • Backups!?
  • Remote access for users
  • Unknown unknowns
  • DNS
  • Goddamn email
  • Resource monitoring

I think I need a coffee. What I want to do is use this list and knock out some posts detailing the crap you need to put up with to save a few bucks and call yourself a server administrator 🙂

Time for you and time for me,
And time yet for a hundred indecisions,
And for a hundred visions and revisions,
Before the taking of a toast and tea.

– T.S. Eliot (lol)

01 Jul

Firing up NetBox

Over at Packet Life, stretch has been talking about his/DigitalOcean’s cool new IPAM/DCIM tool, NetBox. It’s open source, being developed like crazy and has some interesting features (for me, the ability to document my lab setup, which is getting out of hand, seems like a good place to start).

I ran through the installer on Github and squirted the install onto a fresh Ubuntu 14.04 LTS server on AWS. Here are my thoughts…

  • The install instructions were very well written for a fresh OSS product – pretty much nothing failed the first time through. This is reminiscent of other DigitalOcean documentation I have read. I did need to jump out of editing a config file to generate a key (see the snippet below the list), but that’s minor.
  • From an absolute Linux beginner’s POV (not quite the boat I’m in, but a boat I was fairly recently in) – there are a few things missing (like when to sudo/be root or not per command). It’s not a noob’s setup guide, but it is pretty easy to follow otherwise.
  • The installation/setup takes around 10-15 mins including generating the AWS EC2 host
  • The interface of NetBox looks lovely and clean
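
For reference, the key generation bit I mentioned: the NetBox repo ships a small helper script for producing the SECRET_KEY, so you don’t have to invent a 50-character random string yourself. I’m going from memory on where it lives, so:

# from wherever you cloned/extracted netbox – the helper's exact path varies by version
find . -name generate_secret_key.py
# run it with python and paste the output into SECRET_KEY in configuration.py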

All in all, it was pretty easy and smooth to install. Now it’s up and running I can’t wait to document my lab and post the results up here.

 

29 Apr

Owncloud 9.0.1 on Raspberry Pi 3 – Step by Step

Why?!

I bought a Raspberry Pi 3 on the day it was announced, because I am easily excitable. When it arrived, I tried out a few things like compiling Synergy (much faster than a RPI2!) and the oblig. Quake 3. Once the fun wore off, I thought this might be a good time to finally sort out my cloud storage issues. The issues are as follows:

1) I am mildly concerned about having my data live on someone else’s computer
2) I really like and rely on Dropbox, but my 8GB isn’t enough anymore
3) I am a cheapskate

The solution for this is to self-host a ‘cloud’ storage system. While that’s a bit of a paradox, having a system (with apps!) that lets me have my files with me wherever I go and automatically upload the pictures I take on my phone is too handy to give up – and too risky to have no real control over. The best open-source (free, see point 3 above) solution I’ve found so far is OwnCloud.

Note: If you want to do an OwnCloud install following this post – it doesn’t need to be on a Raspberry Pi 3 – you can do it on pretty much any Debian/Ubuntu server. One day I will move this whole thing to a proper server, but again, see point 3.

Note 2: There are hundreds/thousands/millions of ways to do this task. I am basing this whole thing on Dave Young’s very well written howto on Element14. In fact, you can probably follow that right now and skip my post – I am writing this down for my own benefit and there are a *few* changes in OwnCloud 9.0.1

What?
OK, so what do you need to get this up and running?

  1. A Raspberry Pi 3 Model B – buy one here 
  2. A MicroSD card (Class 10 is speedy, you can get a 200GB one for $80USD at the moment, which is NUTS).
  3. The .img file for Raspbian OS. I suggest using Raspbian Jessie Lite
  4. An internet connection. Any one will do, but a decent 30/10Mbit/s is probably recommended.
  5. A static IP and a domain name (or a dynamic DNS service such as the one offered at system-ns.net)
  6. A keyboard and an HDMI capable monitor or screen. This is just for setting up the Pi
  7. Some basic Unix shell skills.. Although you can just follow along and hopefully I’ll spell everything out
  8. A router/firewall in your house that you can forward ports on

Optional extras:

  1. An external USB HDD (for more space)
  2. A smartphone, for the OwnCloud App

If you have the 8 things above (we’ll cover the dynamic DNS part of item 5 in some detail later) – then we’re good to start.

Steps to success

Section 1 – Setting up the Raspberry Pi 3

This section you can skip if you already have a freshly installed Raspberry Pi 3 running Raspbian.

  1. First up, install Raspbian OS on the SD card. Steps to do this for your main PC OS are here.
  2. When you have booted into your newly installed Pi, log in with username pi and password raspberry. You’ll change this soon. I suggest you do this bit with the Pi plugged into a TV or screen, with a keyboard. We’ll SSH in and control the Pi from another machine later on.
  3. Run the raspi-config tool – you can set your hostname, expand the file system (use all of your SD card, not just a couple of GB)

    pi@rasbian:~ $ sudo raspi-config
    

    In there, I suggest you run Options 1 and 2. Inside Option 9, I set the Memory Split to 4MB (this is mostly a headless install, why waste RAM on a GPU that won’t get much use) and enable SSH. I changed the hostname to ‘cloud’ – pick a name you like. Finish up, then reboot the Pi

    pi@cloud:~ $ sudo reboot
  4. Find your IP address. I am using the Ethernet interface (physically plugged into a switch), but you could use WiFi if you wanted (Raspberry Pi 3 has in-built WiFi, google will show you how to set it up!)

    pi@cloud:~ $ ip addr | grep eth0 | grep inet
     
       inet 192.168.0.13/24 brd 192.168.0.255 scope global eth0

    Here is a command to show the IP DHCP has given you (Raspbian uses DHCP by default). I could set this manually by editing /etc/network/interfaces and changing ‘iface eth0 inet dhcp’ to ‘inet static’, but I won’t be doing that. I’ll be assigning a static DHCP lease in my router config to keep my Pi on 192.168.0.13 for good. ANYWAY – my IP is found, so I can SSH in from my main computer and live an easier life.
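
    For the curious, the static version of that stanza in /etc/network/interfaces would look something like this (the addresses are mine – swap in your own network’s details):

    auto eth0
    iface eth0 inet static
        address 192.168.0.13
        netmask 255.255.255.0
        gateway 192.168.0.1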

  5. Log in to your Pi from a terminal. iTerm2 for OSX, Putty for Windows, uhh, Terminal from *nix. I use password-less entry as a security feature and because I’m lazy – if you want a hand setting that up let me know – otherwise, you’re in and can run the following commands:

    pi@cloud:~ $ sudo usermod -a -G www-data www-data
    
  6. pi@cloud:~ $ sudo apt-get install nginx openssl php5-cli php5-sqlite php5-gd php5-common php5-cgi sqlite3 php-pear php-apc php5-curl libapr1 libtool curl libcurl4-openssl-dev php-xml-parser php5 php5-dev php5-gd php5-fpm memcached php5-memcache varnish php5-apcu git
  7. That chunky block will install some key pieces of software for us
    1. NGINX – the web server we’ll be using
    2. PHP5 – PHP, bane of all existence
    3. Lots of PHP bits and pieces – php5-apcu is the memory cache we’ll use, for example
    4. git – to let us grab the Let’s Encrypt client, for a nice free SSL certificate

Section 2 – Configuring and installing other bits

  1. Let’s get an SSL cert, so browsing to our cloud will show a nice green lock in the browser, and the OwnCloud app won’t complain *too much*. Of course, this also helps to keep the contents of the cloud machine nice and secure. I use Let’s Encrypt for free signed certificates – something you couldn’t even dream of 5 years ago. As there isn’t a packaged Let’s Encrypt installer for ARM7 Debian at the moment, we’ll use git to grab one:

    cd 
    git clone https://github.com/letsencrypt/letsencrypt
    cd letsencrypt
    ./letsencrypt-auto --help

    This will grab Let’s Encrypt (the software) and chuck it in a folder in the pi user’s home directory called letsencrypt, then move us in there. It took about 10 minutes on my Pi 3 with a reasonable internet connection; your results may vary.

    When that’s installed, we need to break out and get a DNS name (and a dynamic one, at that, as my ISP doesn’t offer static IP addressing)…

  2. Head to System NS, sign up and create a Dynamic DNS name. For the rest of this bit, I’ll refer to my domain as cloudy.system-ns.net. Next thing we need to do is automate the IP<>DNS mapping, as my ISP might pull the rug out and change my allocated IPv4 address at any time. System NS has a guide to do this for your OS (makes sense to do it on the Raspberry Pi, though!) – which can be found here. Basically, you create a file on the Raspberry Pi which tells the Pi to download the System NS page responsible for tying your current public IP to the chosen DNS name. Then you schedule it with crontab to run every 5 minutes. This means in the worst case, you will be a DNS orphan for around 5 minutes (System NS TTL for A records seems to be very short, which helps).
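
    As a rough idea of what that ends up looking like – the update URL below is a placeholder, System NS gives you the real one (with your own domain and token) when you follow their guide:

    pi@cloud:~ $ mkdir -p ~/scripts
    pi@cloud:~ $ nano ~/scripts/ddns-update.sh
    
    #!/bin/bash
    # hit the System NS update URL so the DNS record follows our current public IP
    curl -s "https://system-ns.net/api/?hostname=cloudy.system-ns.net&token=XXXX" > /dev/null
    
    pi@cloud:~ $ chmod +x ~/scripts/ddns-update.sh
    pi@cloud:~ $ crontab -e
    # then add a line like this to run it every 5 minutes:
    */5 * * * * ~/scripts/ddns-update.sh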
  3. OK – so now we have a domain, let’s get back to sorting out an SSL cert for it. As this Raspberry Pi will be used solely for OwnCloud (as far as the webserver side of things goes) I will generate the certificate for the /var/www/owncloud directory:

    pi@cloud:~ $ sudo mkdir /var/www/owncloud
    pi@cloud:~ $ cd ~
    pi@cloud:~ $ cd letsencrypt/
    pi@cloud:~/letsencrypt $ ./letsencrypt-auto certonly --webroot -w /var/www/owncloud -d cloudy.system-ns.net

    This will go away and pull a cert down from Let’s Encrypt. In fact, it will pull down a public certificate and a private key. They will be (by default) lurking in /etc/letsencrypt/live/cloudy.system-ns.net (you need to be root to look in there, which you can do with ‘sudo su’ – type ‘exit’ to get back to the pi user when you’re done gawking at the nice new certs).
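
    One thing to keep in mind: Let’s Encrypt certificates only last 90 days, so they need renewing every now and then. Something along these lines should do it – I’d run it by hand first before trusting it to cron:

    pi@cloud:~ $ cd ~/letsencrypt
    pi@cloud:~/letsencrypt $ ./letsencrypt-auto renew
    pi@cloud:~/letsencrypt $ sudo service nginx reload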

  4. So, that’s some housekeeping done. Next, I’ll steal wholesale from Dave and give a modified version of his NGINX config (remember, that’s the web server we installed back in Section 1). Let’s edit the nginx config file! (Actually, let’s delete everything in it and start again!)

    pi@cloud:~ $ sudo nano /etc/nginx/sites-available/default

    This will open nano, the wuss’s text editor. I love it, because I can remember how to use it in a hurry. Anyway, delete everything in there (ctrl-k deletes a whole line at a time in nano). Then edit this, and throw it in:

    upstream php-handler {
    server 127.0.0.1:9000;
    #server unix:/var/run/php5-fpm.sock;
    }
    
    server {
    listen 80;
    server_name cloudy.system-ns.net;
    return 301 https://$server_name$request_uri;  # enforce https
    }
    
    server {
    listen 443 ssl;
    server_name cloudy.system-ns.net;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    
    ssl_certificate /etc/letsencrypt/live/cloudy.system-ns.net/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloudy.system-ns.net/privkey.pem;
    
    # Path to the root of your installation
    root /var/www/owncloud;
    
    client_max_body_size 2000M; # set max upload size
    fastcgi_buffers 64 4K;
    
    rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
    rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
    rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;
    
    index index.php;
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;
    
    location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
    }
    
    location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README) {
    deny all;
    }
    location / {
                    # The following 2 rules are only needed with webfinger
                    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
                    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    
                    rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
                    rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
    
                    rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
    
                    try_files $uri $uri/ index.php;
    }
    
    location ~ \.php(?:$|/) {
                    fastcgi_split_path_info ^(.+\.php)(/.+)$;
                    include fastcgi_params;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    fastcgi_param PATH_INFO $fastcgi_path_info;
                    fastcgi_param HTTPS on;
                    fastcgi_pass php-handler;
    }
    
    # Optional: set long EXPIRES header on static assets
    location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
                    expires 30d;
                    # Optional: Don't log access to assets
                    access_log off;
    }
    
    }

    You can copy mine and replace the cloudy.system-ns.net bits (and the paths) with your own details.

  5. Now we can change the PHP settings to allow for larger uploads of files. Dave suggests 2GB, so we’ll go with that. In practice you can set it to whatever your filesystem allows. I’ll combo in a couple of steps here (changing where php-fpm listens as well)

    sudo nano /etc/php5/fpm/php.ini
    search for upload_max_filesize and set it to 2000M
    search for post_max_size and set it to 2000M
    
    sudo nano /etc/php5/fpm/pool.d/www.conf
    Then Change:
    listen = /var/run/php5-fpm.sock 
    to :
    listen = 127.0.0.1:9000
    
    sudo nano /etc/dphys-swapfile
    Then Change:
    CONF_SWAPSIZE=100 
    to :
    CONF_SWAPSIZE=512

    I didn’t follow all of these exactly, so I’ll copy-pasta Dave’s advice and leave you to try it out.

  6. Reboot the Pi!

    sudo reboot
    [enter]
  7. Now you need to install OwnCloud (hey, finally!). The way to do it on a Raspberry Pi 3 running Raspbian is simple.
    1. Grab the tarball from here

      pi@cloud:~ $ cd
      pi@cloud:~ $ mkdir downloads
      pi@cloud:~ $ cd downloads/
      pi@cloud:~/downloads $ wget https://download.owncloud.org/community/owncloud-9.0.1.tar.bz2
    2. Extract the files

      pi@cloud:~/downloads $ sudo tar xvf owncloud-9.0.1.tar.bz2
    3. Now you end up with a nice folder called ‘owncloud’. We want to stick that where NGINX is looking for websites, so we’ll move it to /var/www, change the ownership of the folder to belong to the www-data user and delete all the evidence.

      pi@cloud:~/downloads $ sudo mv owncloud/ /var/www
      pi@cloud:~/downloads $ sudo chown -R www-data:www-data /var/www
      pi@cloud:~/downloads $ rm -rf owncloud*
    4. Dave now recommends you edit a couple of files in the /var/www/owncloud folder to tweak the filesizes:

      pi@cloud:/var/www/owncloud $ sudo nano .htaccess
      Then set the following values to 2000M:
      php_value upload_max_filesize
      php_value post_max_size
      php_value memory_limit
      
      pi@cloud:/var/www/owncloud $ sudo nano .user.ini
      Then set the following values to 2000M:
      upload_max_filesize
      post_max_size
      memory_limit
    5. Do a final ‘sudo reboot’
    6. Browse to your Pi’s IP (192.168.0.13 in my case, or whatever yours is) and set up OwnCloud. I have mine set to use the SD card for storage; if you wish to use an external HDD, consult Dave’s excellent post.
  8. When you first load OwnCloud it might complain about a few things, they can usually be solved by installing a memcache, tightening up file/folder permissions or tweaking other security features. As you have a valid SSL cert, you can probably get going pretty well straight away. I’ve added the necessary tweaks to the NGINX config that should get you past most of the pains, at least under Owncloud 9.0.1.
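
    The memcache warning was the one I saw. Since we installed php5-apcu back in Section 1, telling OwnCloud to use it is a one-line config change, done in the same spirit as the other nano edits above (the config key below is what OwnCloud 9’s admin docs ask for – double-check it if you’re on a different version):

      pi@cloud:~ $ sudo nano /var/www/owncloud/config/config.php
      Then add this line inside the $CONFIG array:
      'memcache.local' => '\OC\Memcache\APCu',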
  9. That should be it! I lied above, though: I use both the SD card and a backup onto an external HDD. This gives me the ability to live without my external HDD if I need it for anything temporarily, and to keep 2 copies of all my stuff locally. To do it, I use a nice program called rsync and a little script + crontab to schedule it to run. rsync has a million uses, but one is copying files from one place to another in a really smart way, using deltas (i.e. it only copies the files that have changed). This way, I can schedule it to run every 6 hours, and it only takes as long as transferring the new or changed files.
    1. Install rsync!

      pi@cloud:~ $ sudo apt-get install rsync
    2. Create a script to run the sync..

      pi@cloud:~ $ mkdir scripts
      pi@cloud:~ $ cd scripts
      pi@cloud:~/scripts $ nano oc-backup.sh
      
      #!/bin/bash
      
      date
      sudo rsync -avz /var/www/owncloud/ /media/shamu/oc-backup/ | tail -n 2
      exit 0
      
      pi@cloud:~/scripts $ chmod +x oc-backup.sh
      pi@cloud:~/scripts $ crontab -e
      
      Add the following to crontab (which is a bit like nano, or vi, depending on what you select the first time you run it)
      
      0 0,6,12,18 * * * ~/scripts/oc-backup.sh >> /media/shamu/oc-backup.log
    3. Now every 6 hours from midnight that will run, doing an rsync and adding a date-stamped line to the file oc-backup.log. My 1TB external drive is permanently mounted to /media/shamu, by the way.

So, that’s it. Up and running. Mine has been working for 2 weeks now, with approx 128MB of RAM free during normal operation (I check a little too often). php-fpm eats the CPU for breakfast when doing things like generating image previews in the app (I use the Android one and it’s great)… but mostly it all works. In fact, today I deleted my Dropbox app and all the files on it – I’m eating my own dogfood here, trusting this setup for better or worse. If you use it, let me know!