diff --git a/404.html b/404.html new file mode 100644 index 0000000..0a12ffa --- /dev/null +++ b/404.html @@ -0,0 +1,49 @@ + + + + + + + + + + + + 404! + + + +
+

Error 404

+
+ +
+
+ +
+
+

You broke the internet

+

This page probably doesn't exist, but if it's supposed to, a developer will be looking into why it's borked.

+

Use the navigation above or below to head somewhere a little more functional

+ +
+
+ + + + + diff --git a/README.md b/README.md index 0310594..7feea1d 100644 --- a/README.md +++ b/README.md @@ -18,10 +18,19 @@ semantic, accessable, and snappy. - About Me - OneBag - Design Choices (for website) + - Recipes - A blog with RSS ## Release Notes +### v0.7.0 + +- Add bulk blog articles (mainly server guide) +- Add styling (e.g. tables) used in articles +- Add 404 page +- Add sitemap.xml +- Remove noreferrer from links + ### v0.6.0 - Add blog diff --git a/TODO b/TODO deleted file mode 100644 index b4c3aa1..0000000 --- a/TODO +++ /dev/null @@ -1 +0,0 @@ -change @Aney URL from www.aney.co.uk to aney.co.uk diff --git a/blog/add-domain-to-server.html b/blog/add-domain-to-server.html index 5bfc9bb..c46d70b 100644 --- a/blog/add-domain-to-server.html +++ b/blog/add-domain-to-server.html @@ -40,16 +40,12 @@

Add the A record

There will likely be many options for adding records, but all we need is to add a single A record

Find the box that allows you to "Add a new record" and input the below, replacing the placeholders with your IP address and domain name
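For illustration only, with purely hypothetical values (203.0.113.10 as the server IP, example.co.uk as the domain), such a record would look something like:

Type: A
Name: example.co.uk
Value: 203.0.113.10
TTL: 3600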

-
-					
-					
-				
+ +

If there aren't multiple boxes, but instead a single box to input your record into, this is what you'll add instead

-
-					
-					
-				
+ +

Wait for propagation

Now there's a bit of a waiting game, as you need to wait for the new DNS record to propagate (get updated) for all nameservers. This can be anywhere from instantly to 72 hours, but typically takes an hour or two.
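If you'd like to check whether the record has reached your resolver yet, one quick way (assuming dig is installed, e.g. from the dnsutils package, and swapping in your own domain) is to query it directly:

dig +short example.co.uk A

If it returns your server's IP address, propagation has reached you.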

diff --git a/blog/add-php-to-nginx.html b/blog/add-php-to-nginx.html new file mode 100644 index 0000000..4131972 --- /dev/null +++ b/blog/add-php-to-nginx.html @@ -0,0 +1,75 @@ + + + + + + + + + + + + + Adding PHP to NGINX server + + + +
+

Adding PHP to NGINX server

+
+ +
+
+ +
+
+

PHP is one of the most widely used programming languages for websites, and it allows you to add practically any functionality you'd ever want to your sites.

+ +

Install

+
sudo apt install php-fpm php-mysql
+ +

Configure php.ini

+
sudo nano /etc/php//fpm/php.ini
+

Uncomment the cgi.fix_pathinfo line and set it to 0, so it looks like below

+
cgi.fix_pathinfo=0
+ +

Add to Website's NGINX conf

+

For each website where you want to use PHP, you'll need to edit the config file

+
sudo vim /etc/nginx/sites-available/
+

The following code needs adding within the server block

+
location ~ \.php$ {
+	include snippets/fastcgi-php.conf;
+	fastcgi_pass unix:/run/php/php-fpm.sock;
+}
+

This will use nginx's fastcgi-php.conf snippet, which is more secure by default than many other PHP/nginx configs because it 404s if the file doesn't exist. Read Neal Poole's Don't trust the tutorials for more info.

+ +

Reload NGINX

+
sudo systemctl reload nginx
+ +

Test it works

+

Create a PHP file e.g. filename.php in the website's directory, and add the snippet below into it
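A minimal version of such a snippet, using PHP's built-in phpinfo() function to produce that dump, would be:

<?php
phpinfo();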

+
+

Go to that webpage in your browser e.g. domain.co.uk/filename.php, and if PHP is working you should see a dump of your PHP version, headers, etc.

+ +

Make nginx use index.php as homepage/root

+

Now we'll set nginx to load up index.php as the root of the website, if it exists. Open the site's config with an editor

+
vim /etc/nginx/sites-available/
+

Change the index line to read as below. This will tell the server to load index.php, and if it doesn't exist, load index.html in its stead

+
index index.php index.html
+			
+
+ + + + + diff --git a/blog/adminer-setup.html b/blog/adminer-setup.html new file mode 100644 index 0000000..f8c9b6a --- /dev/null +++ b/blog/adminer-setup.html @@ -0,0 +1,85 @@ + + + + + + + + + + + + + Adminer Setup + + + +
+

Adminer Setup

+
+ +
+
+ +
+
+

Adminer is a simple front-end for your database server that can be accessed through the browser

+

Pre-Requirements

+

To run Adminer, you'll need a webserver with PHP set up. Thankfully I'm a great guy, and have written about these topics before.

+ + +

Download the latest release

+

This will download the latest release to your default web directory; this can be changed by altering the path following -O.

+
wget "http://www.adminer.org/latest.php" -O /var/www/html/adminer.php
+ +

Set permissions for your web server

+
chown -R www-data:www-data /var/www/html/adminer.php
+chmod 755 /var/www/html/adminer.php
+ +

Access it

+

Head to your /adminer.php, and you should load into the Adminer login. Using your MySQL/MariaDB credentials, you can then log in, and use the GUI to manage your database(s)

+ +

Make it a directory, not a file

+

Instead of accessing /adminer.php, we can make it look like /adminer/
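For the location block below to work, adminer.php needs to sit as index.php inside an adminer directory. Assuming the paths used earlier, something like:

sudo mkdir -p /var/www/html/adminer
sudo mv /var/www/html/adminer.php /var/www/html/adminer/index.php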

+
location /adminer/ {
+	root /var/www/html ;
+	try_files $uri $uri/ /adminer/index.php/$is_args$args ;
+}
+ +

Password Protect

+

An additional level of security, just in case. Using an htpasswd file, any file or directory can be password protected

+
sudo apt install apache2-utils
+htpasswd -c /home//.htpasswd admin
+ +

Add to location

+

Add the location of the auth file to the adminer location block

+
auth_basic "Adminer" ;
+auth_basic_user_file /home//.htpasswd ;
+

The block should look like below

+
location /adminer/ {
+	auth_basic "Adminer" ;
+	auth_basic_user_file /home//.htpasswd ;
+	root /var/www/html ;
+	try_files $uri $uri/ /adminer/index.php/$is_args$args ;
+}
+ +
+
+ + + + + diff --git a/blog/backup-mysql-mariadb.html b/blog/backup-mysql-mariadb.html new file mode 100644 index 0000000..65f8b02 --- /dev/null +++ b/blog/backup-mysql-mariadb.html @@ -0,0 +1,52 @@ + + + + + + + + + + + + + Backup MySQL/MariaDB + + + +
+

Backup MySQL/MariaDB

+
+ +
+
+ +
+
+

A database is a huge part of many projects, services, and servers. If something goes wrong, or data is wrongly updated/deleted, there could be many problems. Thankfully we can make backups to make sure our data is safe.

+

Manual Backup of a DB
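A minimal sketch of such a backup, assuming mysqldump is available and substituting your own database name and backup path, could be:

mkdir -p ~/backups
mysqldump -u root -p $DATABASE_NAME > ~/backups/$DATABASE_NAME.sql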

+ +

Backup all DBs

+

This follows on from the manual backup, so assumes you have the backup directory created
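A sketch using mysqldump's --all-databases flag:

mysqldump -u root -p --all-databases > ~/backups/all-databases.sql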

+

Automate hourly backups
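Combining the above with cron (covered in my cronjob article), a hypothetical hourly crontab entry might be:

0 * * * * /home/$USERNAME/scripts/db_backup.sh

where db_backup.sh wraps one of the mysqldump commands above.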

+ +

My backup script

+

I follow the 3-2-1 backup approach, so I want to keep the files in multiple locations
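The script itself isn't reproduced here, but a minimal sketch of the idea (credentials assumed to come from ~/.my.cnf so it can run unattended; paths and hosts are placeholders) could be:

#!/bin/bash
BACKUPDIR=~/backups
mkdir -p $BACKUPDIR
# Local dump (first copy)
mysqldump --all-databases > $BACKUPDIR/all-databases.sql
# Off-site copy over ssh (second copy)
rsync -azh -e ssh $BACKUPDIR/ $REMOTE:$REMOTEBACKUP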

+ +
+
+ + + + + diff --git a/blog/backup-with-cron.html b/blog/backup-with-cron.html new file mode 100644 index 0000000..5c06530 --- /dev/null +++ b/blog/backup-with-cron.html @@ -0,0 +1,113 @@ + + + + + + + + + + + + + Automating Backups with Cronjobs + + + +
+

Automating Backups with Cronjobs

+
+ +
+
+ +
+
+

Backups are wonderful things that save hours upon hours of work, and stress, so long as they're actually made in the first place!

+

Automatically taking backups allows for peace of mind that your work won't be lost forever whilst you go about your normal workflow.

+ +

Create a backup script

+

You can just call rsync, etc. in cron, but I recommend making a backup script (or a few) for each specific type of backup you want to make.

+

Create the file wherever you want to keep them; for the sake of this, it'll be a scripts directory in your home directory

+
vim ~/scripts/backup_script.sh
+

And add whatever your backup script wants to do. If you've no idea, check out my rsync, and rdiff articles first.

+
rdiff-backup $DIRECTORY_TO_BACKUP $DIRECTORY_TO_BACKUP_TO
+rdiff-backup --force --remove-older-than 2W $DIRECTORY_TO_BACKUP_TO
+

The above example will back up a directory, and remove any changes older than 2 weeks.

+

Now make the script executable

+
chmod +x ~/scripts/backup_script.sh
+ +

Add a cronjob

+

Now for the automation part. Using cron we can set this script to run on almost any schedule. I recommend crontab guru to learn more about the expressions used for cron.

+

Edit the cron table (crontab)

+
crontab -e
+

And add the following

+
0 */2 * * * /home/$USERNAME/scripts/backup_script.sh
+

This will run the backup script every 2 hours, every day

+ +

An advanced backup script

+

An advantage of using a script for backups is that it allows for more intricate functionality. You may not need this functionality, but it's greatly useful.

+

The script below is something I wrote to backup my home directories for each of my servers. It's used to make hourly backups, and send these backups to a remote server daily at midnight.

+
#!/bin/bash
+
+# Set locations to backup/backup to from the flags
+while getopts d:b:r:R:n: flag
+do
+        case "${flag}" in
+            d) DATA=${OPTARG};;
+            b) BACKUPDIR=${OPTARG};;
+            r) REMOTE=${OPTARG};;
+            R) REMOTEBACKUP=${OPTARG};;
+            n) NOW=${OPTARG};;
+        esac
+done
+
+# If the backup directory doesn't exist, make it
+mkdir -p $BACKUPDIR
+
+# Incremental backup of the directory locally
+rdiff-backup $DATA $BACKUPDIR
+# Don't keep changes from over 1W ago
+rdiff-backup --force --remove-older-than 1W $BACKUPDIR
+
+# Backup to remote
+# Get the hour/minute time
+TIME=$(date +%H%M)
+
+# If it's a midnight backup, or a manual backup with -n 1 flag set
+if [ "$TIME" = 0000 ] || [ "$NOW" = 1 ]
+then
+        # Create the remote directory for backup if it doesn't exist
+        ssh $REMOTE mkdir -p $REMOTEBACKUP
+
+        # Copy the backup to the remote location
+	# -e ssh makes it secure
+        rsync -azh -e ssh \
+                --delete \
+                $BACKUPDIR \
+                $REMOTE:$REMOTEBACKUP
+fi
+
+

Which is called in the crontab like so

+
# Hourly rdiff-backup of $DIRECTORY_TO_BACKUP
+0 */1 * * * $SCRIPT_LOCATION -d $DIRECTORY_TO_BACKUP -b $LOCATION_TO_SAVE_BACKUP -r $EXTERNAL_SERVER_SSH -R $EXTERNAL_SERVER_BACKUP_LOCATION
+ +

This script can easily be used for many different directories on each server, without needing to change the script itself. All that's needed is to change the cronjob, and/or add another cronjob with different values passed in.

+ +
+
+ + + + + diff --git a/blog/backup-with-rdiff.html b/blog/backup-with-rdiff.html new file mode 100644 index 0000000..139fa32 --- /dev/null +++ b/blog/backup-with-rdiff.html @@ -0,0 +1,53 @@ + + + + + + + + + + + + + Backup with rdiff-backup + + + +
+

Backup with rdiff-backup

+
+ +
+
+ +
+
+

Like rsync, rdiff-backup is a tool used for incremental backups. Unlike rsync, however, rdiff-backup keeps the most recent file change along with any previous changes, deletions, etc.

+

Install

+
sudo apt install rdiff-backup
+

Backup

+
rdiff-backup $dir $backup
+

Restore

+
rdiff-backup -r 2D $backup $restore_dir
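To see which increments are available to restore from, rdiff-backup can list them:

rdiff-backup --list-increments $backup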
+

Advanced

+

Only keep backups for a certain time period

+
rdiff-backup --force --remove-older-than 2M $backup
+

This will remove all backups older than 2 months from $backup.

+
+
+ + + + + diff --git a/blog/backup-with-rsync.html b/blog/backup-with-rsync.html new file mode 100644 index 0000000..f84267e --- /dev/null +++ b/blog/backup-with-rsync.html @@ -0,0 +1,84 @@ + + + + + + + + + + + + + Backup with rsync + + + +
+

Backup with rsync

+
+ +
+
+ +
+
+

Rsync is a program that allows for incremental backups. This means that rsync will not create an additional copy of the data when backing up, it will only backup changes to the files/directories, saving bandwidth and storage space.

+ +

Installation

+
sudo apt install rsync
+ +

Backup

+
rsync -azh $ORIGINAL $BACKUP
+

Replace $ORIGINAL with the file/directory to back up, and $BACKUP with the location for the backup to reside.

+

The $BACKUP destination must be a blank directory, an rsync directory, or not currently exist.

+ +

Remote rsync backup

+

If you need to rsync from one PC to another, it's essentially the same command, but with the additional layer of ssh

+
rsync -azh -e ssh $ORIGINAL $BACKUP
+

$BACKUP here will be an ssh connection pointed to a location, much like when using scp, so the command will look like

+
rsync -azh -e ssh $ORIGINAL $USER@$HOST:$LOCATION
+

Replacing $USER and $HOST with the username and hostname/IP for the server

+ +

Restore

+

A restore in rsync doesn't require any rsync code per se, as you can just copy individual files from the backup location to the restore location.

+

Alternatively, to restore the entire directory (keeping files that haven't changed, and returning those that have to their state at the last backup), rsync can do that as below

+
rsync -auv $BACKUP $RESTORE
+

Over the internet

+

Like with backups, these restores can be done over the network/internet too

+
rsync -auv $USER@$HOST:$BACKUP $RESTORE
+ +

Notes/Advanced

+

+-r recursive. All files/directories in the path will be backed up
+-a archive mode. Recursive, but with file permissions, symlinks, etc. retained
+-z compress file data during transfer
+-b make backups of files that would otherwise be overwritten or deleted
+-R use relative path names
+-u update. Copy only files that have changed
+-P show progress (and keep partially transferred files)
+-c checksum. Compare files by checksum rather than modification time and size
+-p preserve permissions
+-h human readable. Make the output readable by humans
+
+ +

Downsides

+

Rsync only keeps one copy of the data, and doesn't keep the changes that were made, making it impossible* to restore a file's contents from the day previous. If this is what you're after, look at rdiff-backup.

+

* Not impossible, as you can set rsync to do this, but it requires a bit of scripting, and isn't as easy as just running the program
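As a sketch of that scripting, rsync's --backup and --backup-dir flags can move changed/deleted files into a dated directory (relative to $BACKUP) instead of discarding them:

rsync -azh --delete --backup --backup-dir=changes_$(date +%F) $ORIGINAL $BACKUP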

+
+
+ + + + + diff --git a/blog/blog-thoughts-220822.html b/blog/blog-thoughts-220822.html new file mode 100644 index 0000000..89db3ae --- /dev/null +++ b/blog/blog-thoughts-220822.html @@ -0,0 +1,63 @@ + + + + + + + + + + + + + Blog Thoughts + + + +
+

Blog Thoughts

+
+ +
+
+ +
+
+

I've been having some thoughts about potential changes to the blog, including layouts, moving existing content, and such.

+ +

Separate article types?

+

This was something I thought about when initially adding this blog: "what do I want on the blog?". It came from wanting to add different variations of post, e.g. Traditional Article, Guide/Tutorial, Reviews, Thoughts (think twitter-length posts), Recipes, etc., and wondering how to do them. Did I want to have separate /guides/, /blog/, /recipes/ sections, or lump them all together?

+

The easiest option was just to lump everything together into /blog/, and then start writing what I wanted to put onto the site, and slowly get some content in. I'm glad I did this as it's tricked my little pea-brain into getting at least some content up, but most of it sits in guide territory, and that makes it more difficult to find my actual articles/thoughts (very few as of now, but eh).

+

I have two trains of thought on this matter, which are:

+ +

If I do the former here, I can 302 the articles to their new locations (turning them into 301s once I'm 100% sure), but I'm unsure for now, so experimentation is in order.

+ +

Pagination

+

I honestly haven't put any thought into this at all, but as the blog page gets slowly filled with article links, I feel it's a little much for one page. If for instance I write 10,000 articles, that's a lot of searching, and more importantly a lot of page size that doesn't need to exist.

+

I'm not currently using a blogging platform, a server-side language, or even a framework like Hugo; each page is entirely hand-written, so pagination will be a pain in the backside. For this reason it's on this list, as I need to figure out a decent way to do it without it being a nightmare to maintain and keep up to date.

+ +

Featured/Pinned

+

A short thought. I could simply leave the pinned articles as-is, with a few articles in a separate list without styling, or I could style them a little.

+

If the styling option is gone for, it'd be nothing hugely fancy, probably just 3 boxes with the title, and maybe even a background colour. I'm not sure how I feel about it though, as adding ~100 bytes or so to the CSS is unneeded, and the styling could possibly detract from the rest of the blog page.

+ +
+
+ + + + + diff --git a/blog/certbot-ssl.html b/blog/certbot-ssl.html new file mode 100644 index 0000000..ed4b57b --- /dev/null +++ b/blog/certbot-ssl.html @@ -0,0 +1,64 @@ + + + + + + + + + + + + + Setup SSL with Certbot + + + +
+

Setup SSL with Certbot

+
+ +
+
+ +
+
+

An SSL certificate is used to secure a domain, preventing people from snooping on traffic, including details entered into forms (username, password, etc.).

+

Install Certbot

+
sudo apt install certbot
+

or

+
sudo apt install python3-certbot-nginx
+ +

Run Certbot

+
sudo certbot --nginx
+

or

+
sudo certbot --nginx -d 
+

I recommend the former command, as it will ask which domain you'd like to set up, whereas the latter should be used if you know for certain the domain name is configured in nginx

+

The first time you run certbot you'll need to enter an email (for alerts), and agree to T&Cs

+

Configure HTTPS

+ +

Auto renew

+

Certificates obtained via Certbot are valid for 90 days, so to keep HTTPS up indefinitely we'll need to auto-renew before they expire

+

To do this we'll set up a cronjob to run daily at midnight. This crontab needs to be run by root, so we'll open the crontab with sudo.

+
sudo crontab -e
+

If it's your first time editing the crontab (as root), it'll ask for your editor of choice

+

When the crontab is open, add a line to the bottom with the following

+
0 0 * * * certbot --nginx renew
+

Exit and save, you'll be imformed the crontab has been changed, and every day the cronjob will auto renew SSL certificates that are due to expire in the next 30 days.

+
+
+ + + + + diff --git a/blog/get-a-domain-name.html b/blog/get-a-domain-name.html new file mode 100644 index 0000000..7d4e894 --- /dev/null +++ b/blog/get-a-domain-name.html @@ -0,0 +1,55 @@ + + + + + + + + + + + + + Get a domain name + + + +
+

Get a domain name

+
+ +
+
+ +
+
+

A domain name, as many will know, is what people type into their browser, e.g. google.com, facebook.com, etc.

+

The primary use for these is to give users something memorable, instead of needing to type the IP address of the server

+ +

Choose a registrar

+

First thing is to choose a registrar (who you are leasing the domain from). You can search for "domain name registrars" and find who is cheapest. So long as they handle DNS (which all I've used do) you're good.

+

I'm currently using tsohost.com, as they're pretty cheap, and besides a few little issues, it works for me.

+ +

Choose a domain name

+

On the registrar's website there will be a section to purchase a domain. Upon clicking this you'll likely be greeted with a searchbar, search for whatever domain you'd like here, and they'll let you know if it's available, and what similar domains there are

+

Select the domain(s) you wish, and add it/them to your cart.

+ +

Purchase your domain name

+

Simply checkout, and make your way through the process

+
+
+ + + + + diff --git a/blog/guide-to-server-hosting.html b/blog/guide-to-server-hosting.html index c8104f1..3436da8 100644 --- a/blog/guide-to-server-hosting.html +++ b/blog/guide-to-server-hosting.html @@ -31,74 +31,92 @@

If you want to start getting into server hosting, system administration, or just want to get a basic minecraft/web server up for you and your friends, then welcome. We all start somewhere, and I would love it if I could help get your foot in the door.

-

This is a WIP, so I'll be adding to this guide whenever I get time, and will update it's readibility once it's 'complete'.

+

Notice

+

This is heavily a WIP, so I'll be adding to this guide whenever I get time, and will update its readability, and correct/add anything missing once it's 'complete'. If I didn't put it up in an unfinished state, it would never go live, so bear with.

Basic Server setup

Now you officially own, and have set up, a server. Currently all you can do is SSH into it though, so let's get some services on there

Nginx Webserver

A great first service for any server is a website, even if it's just a little page to let people know you own the server/domain name

MariaDB Database

A database is a great tool to store, access, and filter data. Typically used alongside a website, or other services, but can be useful standalone if you know what you're doing

Backup your server!

-

Backups are super useful. If something breaks, or gets accidentally deleted you can always use a backup to get back it back

+

Backups are super useful. If something breaks, or gets accidentally deleted you can always use a backup to get it back

+ +

Run virtual machines

-

Virtual machines allow you to use your server as multiple servers at once, with different operating systems, services, files, etc.

+

Virtual machines allow you to use your server as multiple servers at once, with different operating systems, services, files, etc. If you're self-hosting this is a great way to separate concerns, having one system for each distinct task.

-

Proxy services to port 80/443

-

Many services you install will be accessible via the web, but will use different ports. Proxying these allows access (and security) without the need to append a port to the server address

-

Additional services/potential guides

-

Unless there is an anchor, these are all "TODO", and may just be omitted from this guide

+

Unless there is an anchor, these are all "TODO", and may just be omitted from this list

+

Useful tidbits

+ + +

Additional Services

+ +

Game Servers

+ + +

Additional guides

+

These are some guides for specific use-cases that will aid with setting up +

diff --git a/blog/index.html b/blog/index.html index 21e1473..4800ca5 100644 --- a/blog/index.html +++ b/blog/index.html @@ -38,8 +38,34 @@

2022

diff --git a/blog/debian-server-setup.html b/blog/initial-server-setup.html similarity index 51% rename from blog/debian-server-setup.html rename to blog/initial-server-setup.html index 48f9635..eb9c424 100644 --- a/blog/debian-server-setup.html +++ b/blog/initial-server-setup.html @@ -3,19 +3,19 @@ - + - Debian Server Setup + Initial Server Setup
-

Debian Server Setup

+

Initial Server Setup