Arch Linux is a lightweight Linux distribution for the x86-64 architecture. It is based on the KISS principle and intended for the proficient Linux user, or anyone willing enough to read the documentation and use online help.
Arch Linux ARM is a port of Arch Linux dedicated to the ARM architecture. It follows a rolling-release model that endeavors to provide the latest stable versions of software.
After using Debian-based distros, I thought it was worth trying an Arch-based one. Besides, the small amount of RAM on the 2GB version of the RPI4 calls for a minimal base system shaped for a specific purpose: a small and relatively secure LEMP web server.
sudo fdisk -l
We need two primary partitions: one for /boot and a second for the rootfs. Here, X stands for the letter assigned to our SD card in the output of the command above.
sudo fdisk /dev/sdX
type o to delete all partitions on the SD card.
type p to list all partitions; there should be none.
type n, then p, and choose 1 for primary partition 1; press Enter to accept the default first sector, then type +100M to define the last sector.
type t, then c, to set the partition type to W95 FAT32 (LBA).
Now the same procedure for the second partition, which will host the rootfs:
type n, then p, and choose 2 for primary partition 2; press Enter twice to accept the default first and last sectors.
type p to list all partitions; there should now be two.
type w to write the changes and quit the fdisk utility.
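For reference, the same keystroke sequence can be collected in a file and reviewed before feeding it to fdisk (a sketch; /dev/sdX is a placeholder and the operation is destructive, so double-check the device first):

```shell
# Collect the fdisk keystrokes described above; the blank lines are the
# "accept default" Enter presses.
cat > fdisk-keys.txt <<'EOF'
o
n
p
1

+100M
t
c
n
p
2


w
EOF
wc -l fdisk-keys.txt   # 14 keystroke lines
# To apply (DESTRUCTIVE): sudo fdisk /dev/sdX < fdisk-keys.txt
```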
2.3 Create file systems and mount the SD card
sudo mkfs.vfat /dev/sdX1
mkdir boot
sudo mount /dev/sdX1 boot
sudo mkfs.ext4 /dev/sdX2
mkdir root
sudo mount /dev/sdX2 root
sudo su
wget http://os.archlinuxarm.org/os/ArchLinuxARM-rpi-latest.tar.gz
tar -xpf ArchLinuxARM-rpi-latest.tar.gz -C root
sync
Now we have to move the files from the extracted /boot directory to the corresponding (first) partition:
mv root/boot/* boot
Unmount the two partitions :
umount boot root
Now we should have a fully functional SD card that we can insert into the RPI and boot it up.
Find the IP address of the Raspberry using the nmap ping scan (-sP is the older spelling of the -sn option), or via the router's interface:
nmap -sP 192.168.1.0/24
Suppose it is 192.168.1.11; we use the default user alarm with password alarm to log in via ssh:
ssh alarm@192.168.1.11
and then switch to the root user (default password: root):
su
A very important step is to initialize the pacman keyring and install the Arch Linux ARM signing keys:
pacman-key --init
pacman-key --populate archlinuxarm
Set the hostname by editing the file with nano, or with a single echo command:
nano /etc/hostname
echo yourhostname > /etc/hostname
Arch Linux ARM uses the English QWERTY keyboard layout by default, so we have to load the correct layout if we intend to use an attached monitor and keyboard (here, a French layout as an example):
loadkeys fr-pc
echo "KEYMAP=fr-pc" > /etc/vconsole.conf
Then choose our time zone:
ls /usr/share/zoneinfo/
ln -sf /usr/share/zoneinfo/Africa/Tunis /etc/localtime
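To double-check the result, we can read the symlink back (a quick sketch using the zone chosen above):

```shell
# The link should point at the chosen zoneinfo file:
readlink /etc/localtime
# and the clock should now report local time for that zone:
date
```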
To improve security, we add a user with limited rights:
useradd -m -g users -s /bin/bash -G audio,games,lp,optical,power,scanner,storage,video theuser
then define a password for theuser:
passwd theuser
Installing sudo will allow theuser to execute commands with administrative rights:
pacman -S sudo
Then we create a group called sudo and add theuser to it:
groupadd sudo
usermod -a -G sudo theuser
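We can confirm the membership took effect with the id command (theuser being the account created above):

```shell
# id prints the uid, the primary group and all supplementary groups;
# "sudo" should now appear in the groups list:
id theuser
```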
Then edit /etc/sudoers to allow members of the sudo group to execute commands:
visudo
Scroll down until we find this line, and uncomment it by removing the #: move the cursor to the first character and press x to delete it.
## Uncomment to allow members of group sudo to execute any command
#%sudo ALL=(ALL) ALL
so that we have:
%sudo ALL=(ALL) ALL
After that, press Esc, then type :wq and press Enter to save the file and exit.
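As a non-interactive alternative, the uncommenting can be scripted with sed (a sketch; always check the result with visudo -c, since a broken sudoers file can lock us out of administrative rights):

```shell
# Strip the leading '#' from the %sudo rule only (the '## ...' comment
# lines above it are left untouched):
sed -i 's/^# *%sudo/%sudo/' /etc/sudoers
# Parse-check the file without applying anything:
visudo -c
```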
Since we plan to use the RPI as a web server, it needs a fixed local IP. We create a file under /etc/systemd/network:
nano /etc/systemd/network/eth0.network
then enter the following (adapt the addresses to your network):
[Match]
Name=eth0
[Network]
DHCP=no
Address=192.168.1.120/24
Gateway=192.168.1.1
DNS=192.168.1.1
We now start the service:
systemctl start systemd-networkd.service
and then enable it in systemd:
systemctl enable systemd-networkd.service
The last thing that remains is the DNS servers:
sudo nano /etc/resolv.conf
We can fill this file with the following, or with any DNS provider we like/trust:
nameserver 8.8.8.8
nameserver 1.1.1.1
nameserver 8.8.4.4
Then we can reboot and verify the new fixed local IP:
reboot
ip address show
##### 3.4 Change root password and delete user alarm
Logged in as root, type:
passwd
pkill -KILL -u alarm
then exit, log in again with the newly created user theuser, and delete user alarm:
exit
sudo userdel alarm
Then reboot to verify everything is working with theuser:
sudo reboot
Once the system is fully functional, we can update it (-Syy forces a refresh of the package databases; -Syu then upgrades all installed packages):
pacman -Syy
pacman -Syu
I'm using a 120GB EMTEC SSD drive with a SATA-to-USB adapter. Once plugged in, we run the fdisk command to partition it as described earlier in section 2 (Installation). We will use 3 partitions: one for /boot, a second for the rootfs, and a third for backups.
The fdisk command should give :
Disk /dev/sda: 111.79 GiB, 120034123776 bytes, 234441648 sectors
Disk model: 50 120GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa507f927
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 616447 614400 300M 6 FAT16
/dev/sda2 616448 51816447 51200000 24.4G 83 Linux
/dev/sda3 51816448 234434559 182618112 87.1G 83 Linux
Now we temporarily mount the sda1 filesystem on /mnt (or any other folder), so we can back up /boot to it:
mount /dev/sda1 /mnt
Then the rsync command copies the whole /boot to sda1, mounted on /mnt:
rsync --info=progress2 -axHAX --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /boot/ /mnt
Now we display the contents of cmdline.txt, and possibly create a backup of it:
cat /mnt/cmdline.txt
usb-storage.quirks=2109:0711:u root=/dev/mmcblk0p2 rw rootwait console=serial0,115200 console=tty1 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 kgdboc=serial0,115200
According to the new boot requirements, we should have:
root=/dev/sda2 rw rootwait console=serial0,115200 console=tty1 selinux=0 plymouth.enable=0 smsc95xx.turbo_mode=N dwc_otg.lpm_enable=0 kgdboc=serial0,115200
/dev/sda2 can be replaced by the UUID that the blkid command reports for sda2.
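The edit can also be scripted; a sketch with sed that backs up the file first and swaps only the root= parameter:

```shell
cp /mnt/cmdline.txt /mnt/cmdline.txt.bak
# Replace the old SD-card root device with the SSD partition:
sed -i 's|root=/dev/mmcblk0p2|root=/dev/sda2|' /mnt/cmdline.txt
grep -o 'root=[^ ]*' /mnt/cmdline.txt   # should print: root=/dev/sda2
```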
Now we can safely unmount sda1 and verify that it is unmounted and the /mnt folder is empty:
umount /dev/sda1
Copying the rootfs to sda2 follows the same steps:
mount /dev/sda2 /mnt
then :
rsync --info=progress2 -axHAX --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt
This can take a few minutes depending on the size of the rootfs and the adapter speed. Once finished:
sync
Next, confirm that the structure has been duplicated.
ls -l /mnt
Now we display the contents of fstab, and possibly create a backup of it:
cat /mnt/etc/fstab
We modify the file to match the new mounts at startup, so that cat /mnt/etc/fstab gives:
# Static information about the filesystems.
# See fstab(5) for details.
# <file system> <dir> <type> <options> <dump> <pass>
/dev/sda1 /boot vfat defaults 0 0
UUID=xxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxxxx /disk3 ext4 defaults,noatime,rw,nofail 0 2
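The UUID placeholder can be filled in from blkid; a sketch assuming sda2 is still mounted on /mnt and sda3 is the backup partition:

```shell
# Fetch the real UUID of the backup partition and append its fstab entry:
uuid=$(blkid -s UUID -o value /dev/sda3)
echo "UUID=$uuid /disk3 ext4 defaults,noatime,rw,nofail 0 2" >> /mnt/etc/fstab
tail -n 1 /mnt/etc/fstab
```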
Now we can safely unmount sda2 and verify that it is unmounted and the /mnt folder is empty:
umount /dev/sda2
ls -l /mnt
shutdown -h now
We initiate a proper shutdown so we can remove the SD card, then start the system entirely from the SSD. The SD card should be kept as a backup in case of SSD failure; it is advisable to test and update it from time to time.
After the reboot, the df command should give:
df -h
Filesystem Size Used Avail Use% Mounted on
dev 897M 0 897M 0% /dev
run 933M 592K 933M 1% /run
/dev/sda2 24G 3.2G 20G 14% /
tmpfs 933M 0 933M 0% /dev/shm
tmpfs 933M 4.0K 933M 1% /tmp
/dev/sda1 300M 40M 261M 14% /boot
/dev/sda3 86G 93M 81G 1% /disk3
By signing up with a commercial VPN provider, I now have access to both a VPN service and a dedicated IP address. The following commands configure an automatic connection to the VPN provider at boot time.
pacman -S openvpn
Then verify the install path:
ls -l /etc/openvpn
total 8
drwxr-x--- 2 openvpn network 4096 Mar 19 2021 client
drwxr-x--- 2 openvpn network 4096 Mar 19 2021 server
Find the sample client.conf file that we can use as a template:
ls -l /usr/share/openvpn/examples
-rw-r--r-- 1 root root 3589 Oct 6 15:14 /usr/share/openvpn/examples/client.conf
Copy the sample file into place:
cp /usr/share/openvpn/examples/client.conf /etc/openvpn/client/client.conf
Change to the directory and back up the sample file, then modify it according to the provider's data:
cd /etc/openvpn/client
Create a file called .secrets containing the username and password, then restrict its permissions:
nano .secrets
username
password
chmod 600 .secrets
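For OpenVPN to actually read these credentials, client.conf must point at the file via the auth-user-pass directive (a sketch; the provider's template may already contain an equivalent line):

```
# in /etc/openvpn/client/client.conf
auth-user-pass /etc/openvpn/client/.secrets
```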
Following this, we need to copy the provider's certificate from our desktop PC to the RPI (using the fixed local address configured earlier):
scp /path/to/cert/VPN.crt root@192.168.1.120:/etc/openvpn/client/VPN.crt
Then we can start the service :
systemctl start openvpn-client@client.service
and enable it at boot time :
systemctl enable openvpn-client@client.service
Verify the status of the newly created service:
systemctl status openvpn-client@client.service
the command gives:
openvpn-client@client.service - OpenVPN tunnel for client
Loaded: loaded (/usr/lib/systemd/system/openvpn-client@.service; enabled; >
Active: active (running) since Sun 2021-12-26 13:31:03 CET; 5h 0min ago
Docs: man:openvpn(8)
https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
https://community.openvpn.net/openvpn/wiki/HOWTO
Main PID: 327 (openvpn)
Status: "Initialization Sequence Completed"
Tasks: 1 (limit: 4303)
CPU: 773ms CGroup: /system.slice/system-openvpnx2dclient.slice/openvpn-client@client>
`-327 /usr/bin/openvpn --suppress-timestamps --nobind --config cli>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: TUN/TAP device tun0 opened
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: net_iface_mtu_set: mtu 1500 for t>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: net_iface_up: set tun0 up
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: net_addr_ptp_v4_add: 10.200.115.1>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: net_route_v4_add: 41.182.106.45/>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: net_route_v4_add: 0.0.0.0/1 via 1>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: net_route_v4_add: 128.0.0.0/1 via>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: net_route_v4_add: 10.200.115.1/32>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: WARNING: this configuration may c>
Dec 26 13:31:06 atriumsch.uk.to openvpn[327]: Initialization Sequence Completed
WireGuard is a new, simple and fast protocol for creating a VPN, based on modern, secure cryptographic primitives. It is open source and its code base is tiny (4 to 5 thousand lines) compared to IPsec or OpenVPN, and it has been supported by the Linux kernel since version 5.6 (March 2020).
pacman -S linux-headers dkms wireguard-dkms wireguard-tools
Then we need to verify that IP forwarding is enabled:
sysctl net.ipv4.ip_forward
If we get net.ipv4.ip_forward = 0, we have to edit /etc/sysctl.d/99-sysctl.conf and add the line below (it can be applied without rebooting with sysctl --system):
net.ipv4.ip_forward = 1
Now as a normal user :
mkdir wireguard && cd wireguard
then we set the umask so the key files are created readable only by their owner:
umask 077
We first generate the server's private and public keys:
wg genkey > server_private.key
wg pubkey > server_public.key < server_private.key
Now the same for the client keys; we can have as many clients as needed:
wg genkey > client1_private.key
wg pubkey > client1_public.key < client1_private.key
Finally, we should have a total of 4 keys:
ls
client1_private.key client1_public.key server_private.key server_public.key
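As an optional sanity check: a WireGuard key is 32 random bytes encoded in base64, so every key file should contain exactly 44 characters plus a trailing newline:

```shell
# Each file should report 45 bytes (44 base64 characters + newline):
wc -c *_private.key *_public.key
```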
Then, using the cat command, we display the content of each file; these values are needed to configure the interfaces:
[theuser@atriumsch wireguard]$ cat server_public.key server_private.key
inQX+okwir8n1UHHrpb9C6dSFOmDUtTEThQJSTqwSE4=
ED5hsO5Td+y1IkMkCjb6Jz+ysXbu1GAN3bVrIg8CR18=
[theuser@atriumsch wireguard]$ cat client1_private.key client1_public.key
MCMDlpuXgqFAUycCjxeGi7gYWfq1ZAni7yH3r2YzWkA=
xf4cAbm6M239agsAvTCaf4rKCCl6wWP8zZLJ55CvRDA=
We can now create and configure the wg0 interface on the server:
sudo nano /etc/wireguard/wg0.conf
[Interface]
PrivateKey = ED5hsO5Td+y1IkMkCjb6Jz+ysXbu1GAN3bVrIg8CR18=
Address = 10.0.0.1
SaveConfig = false
ListenPort = 51820
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o tun0 -j MASQUERADE
DNS = 8.8.8.8
[Peer]
# Client 1
PublicKey = xf4cAbm6M239agsAvTCaf4rKCCl6wWP8zZLJ55CvRDA=
AllowedIPs = 10.0.0.2/32
[Peer]
# Client 2
PublicKey = TB0/TLHB08iRxfAim1hy2VKwgJ/VCDN+cI9P/ux0Py0=
AllowedIPs = 10.0.0.3/32
On the client side there is no difference: WireGuard makes no distinction between client and server, only the configuration files differ. So we install WireGuard on the client machine and configure its wg0 interface:
pcuser@PCdesk:~$ sudo cat /etc/wireguard/wg0.conf
[sudo] password for pcuser:
[Interface]
Address = 10.0.0.2/24
PrivateKey = MCMDlpuXgqFAUycCjxeGi7gYWfq1ZAni7yH3r2YzWkA=
DNS= 8.8.8.8
[Peer]
Endpoint = 192.168.1.120:51820 # using the local address
#Endpoint = atriumsch.uk.to:51820 # using running server on standard port
PublicKey = inQX+okwir8n1UHHrpb9C6dSFOmDUtTEThQJSTqwSE4= # server public key
AllowedIPs = 0.0.0.0/0, ::/0 # allow ipv4 and ipv6 from external network
PersistentKeepalive = 25
Finally, we can launch WireGuard with wg-quick:
sudo wg-quick up wg0
This command normally gives:
[theuser@atriumsch ~]$ sudo wg-quick up wg0
[sudo] password for pilinarch:
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.0.1 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] resolvconf -a wg0 -m 0 -x
[#] ip -4 route add 10.0.0.3/32 dev wg0
[#] ip -4 route add 10.0.0.2/32 dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
Then sudo wg gives the status of the WireGuard wg0 interface:
[theuser@atriumsch ~]$ sudo wg
interface: wg0
public key: inQX+okwir8n1UHHrpb9C6dSFOmDUtTEThQJSTqwSE4=
private key: (hidden)
listening port: 51820
peer: xf4cAbm6M239agsAvTCaf4rKCCl6wWP8zZLJ55CvRDA=
allowed ips: 10.0.0.2/32
[theuser@atriumsch ~]$
From the client side we can run the same command to launch WireGuard, then verify the connection status with sudo wg.
To shut down the interface, simply run sudo wg-quick down wg0. To bring the tunnel up automatically at boot, enable the systemd unit with sudo systemctl enable wg-quick@wg0.service.
Due to the limited resources of the RPI4, nginx was chosen over Apache as the more efficient web server. While nginx uses an event-driven approach that handles multiple requests (PHP, CSS) within a single thread, Apache is process-driven and creates one thread per request, which may lead to higher RAM consumption. Here is a small comparison between the two web servers:
| nginx | apache |
|---|---|
| Event-driven approach (multiple requests within one thread) | Process-driven approach (a new thread for each request) |
| Serves static resources without invoking PHP | Serves static content using the file-based method |
| Doesn't process dynamic content itself; hands it to php-fpm | Processes dynamic content within the server |
| Can work as a web server and as a reverse proxy | Web server |
| Modules must be compiled into the core | More than 60 modules that can be loaded dynamically |
To install nginx and PHP, type:
sudo su
pacman -S nginx-mainline php php-fpm
Then we can start and enable both nginx and php-fpm:
systemctl start nginx
systemctl enable nginx
systemctl start php-fpm
systemctl enable php-fpm
We have to check and adjust the nginx configuration file, which contains the global settings:
nano /etc/nginx/nginx.conf
We should have
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
server_names_hash_bucket_size 64;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*.conf;
}
Unlike Debian-based systems, the following folders are not created by default, so we create them manually:
mkdir /etc/nginx/sites-available /etc/nginx/sites-enabled /etc/nginx/ssl
Also check the path and name of the php-fpm socket, which will be used in the sites-available configuration file:
ls -l /run/php-fpm/
nginx determines which virtual host files to load from the contents of the sites-enabled directory (nginx.conf needs a matching include line, e.g. include /etc/nginx/sites-enabled/*.conf; inside the http block). This folder contains symbolic links pointing to the actual vhost files in /etc/nginx/sites-available.
nano /etc/nginx/sites-available/mysite.conf
For an HTTP-only site using PluXml as a flat-file CMS we should have, according to their site:
server {
listen 80;
server_name localhost;
root /var/www/website;
index index.php;
client_max_body_size 8m; # avoid 413 error when uploading a file
location / {
try_files $uri $uri/ @handler;
}
location @handler {
rewrite ^/(.*)$ /index.php?^$1 last;
}
#PHP
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: Use "cgi.fix_pathinfo = 0;" in php.ini
include fastcgi.conf;
fastcgi_index index.php;
fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
}
## REDIRECTIONS
# Flux RSS
location /feed/ {
rewrite ^/feed\/(.*)$ /feed.php?^$1 last;
}
# Sitemap
location = /sitemap.xml {
rewrite .* /sitemap.php;
}
## PROTECT FOLDERS
location ~ /(version|update|readme|data/configuration) {
deny all;
}
# cache-control
location /data/ {
add_header Cache-Control public;
expires 12h;
}
location /core/ {
add_header Cache-Control public;
expires 12h;
}
location /plugins/ {
add_header Cache-Control public;
expires 12h;
}
location /themes/ {
add_header Cache-Control public;
expires 12h;
}
}
Next, create a symbolic link to enable this configuration file:
ln -s /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/
Subsequently, we run the following two commands to validate the configuration and restart nginx, fixing any inconsistencies reported in the error messages:
nginx -t
systemctl restart nginx
With our HTTP site now operational, we are ready to move to full HTTPS by generating the necessary keys and certificates and adjusting the configuration file within sites-available.
cd /etc/nginx/ssl
Generate a strong RSA key and a self-signed certificate:
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out server.key
chmod 400 server.key
openssl req -new -sha256 -key server.key -out server.csr
openssl x509 -req -days 1095 -in server.csr -signkey server.key -out server.crt
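We can inspect the resulting certificate to confirm its subject and validity period (using the filenames from the commands above):

```shell
# Print the subject and the notBefore/notAfter dates of the new certificate:
openssl x509 -in server.crt -noout -subject -dates
```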
Then go back to /etc/nginx/sites-available/mysite.conf and add the server key and certificate locations.
We also need to configure the full HTTPS setup according to the PluXml site's recommendations (previous link). This typically means specifying the SSL protocol versions, enabling SSL, and pointing at the certificates.
Additionally, update any references to HTTP URLs so they use HTTPS instead, ensuring that all resources on the site are loaded securely.
Once these steps are completed, save the changes to mysite.conf and restart nginx to apply the new HTTPS configuration. The website should now be fully served over HTTPS.
mysite.conf should look like :
server {
listen 443 ssl http2;
server_name atriumsch.uk.to;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
root /var/www/website;
index index.php index.html;
# client_header_buffer_size 1k;
# client_max_body_size 1m;
client_max_body_size 8m; # avoid 413 error when uploading a file
## BASE
# main rule
location / {
try_files $uri $uri/ @handler;
}
# rewrite to index
location @handler {
rewrite ^/(.*)$ /index.php?^$1 last;
}
# PHP
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: Use "cgi.fix_pathinfo = 0;" in php.ini
include fastcgi.conf;
fastcgi_index index.php;
fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
}
## REDIRECTIONS
# Flux RSS
location /feed/ {
rewrite ^/feed\/(.*)$ /feed.php?^$1 last;
}
# Sitemap
location = /sitemap.xml {
rewrite .* /sitemap.php;
}
## PROTECT FOLDERS
location ~ /(version|update|readme|data/configuration) {
deny all;
}
## CACHING
# cache-control
location /data/ {
add_header Cache-Control public;
expires 12h;
}
location /core/ {
add_header Cache-Control public;
expires 12h;
}
location /plugins/ {
add_header Cache-Control public;
expires 12h;
}
location /themes/ {
add_header Cache-Control public;
expires 12h;
}
}
server {
listen 80;
server_name atriumsch.uk.to;
# redirect all HTTP to HTTPS
return 301 https://$host$request_uri;
}
Since December 2015, the Internet Security Research Group (ISRG) has made Let's Encrypt available to the public: a non-profit certificate authority that provides free X.509 certificates for TLS encryption, valid for 90 days and renewable. Let's Encrypt is sponsored by the Electronic Frontier Foundation (EFF), the Mozilla Foundation, OVH, and others.
We will use a script to generate a certificate request for our domain. There are many solutions available on the Let's Encrypt ACME clients page, with certbot as the recommended one. But installing certbot on a low-resource system such as the RPI can be burdensome, as it requires the following dependencies:
python-certbot-doc python3-certbot-apache python3-certbot-nginx python-acme-doc python-configobj-doc python-openssl-doc python3-openssl-dbg certbot python3-acme python3-certbot python3-configargparse python3-configobj python3-icu python3-josepy python3-openssl python3-parsedatetime python3-requests-toolbelt python3-rfc3339 python3-tz python3-zope.component python3-zope.event python3-zope.hookable python3-zope
So I looked for a low-dependency alternative and chose getssl. This script is written in bash, so it runs on all Unix machines and most Linux distros. There are packages for Arch Linux in the AUR, as well as binaries for the rpm and dpkg package managers.
Manual installation is easy since the script is a single file; one line is enough to download it into our home directory and make it executable:
curl --silent https://raw.githubusercontent.com/srvrco/getssl/latest/getssl > getssl ; chmod 700 getssl
Then, directly run the script for our domain :
./getssl -c atriumsch.uk.to
This command creates the following files and folders under the home directory:
~/.getssl
~/.getssl/getssl.cfg
~/.getssl/atriumsch.uk.to
~/.getssl/atriumsch.uk.to/getssl.cfg
First we edit ~/.getssl/getssl.cfg to set the default values for the majority of our certificates.
Then we edit ~/.getssl/atriumsch.uk.to/getssl.cfg with the values for this specific domain (make sure to uncomment and correctly set the ACL option, since it is required).
and finally run
getssl atriumsch.uk.to
The output should give us a fake certificate, since getssl defaults to the staging server (for test purposes). If the output is correct, with no errors, we can go back to the getssl.cfg file in the working directory and uncomment the line #CA="https://acme-v02.api.letsencrypt.org" to issue a valid certificate.
Then we must not forget to copy the issued certificate and key to the nginx ssl folder, and to edit /etc/nginx/sites-available/mysite.conf so it matches the server key and certificate names.
Finally, verify the files and their owners under the nginx ssl folder:
ls -l /etc/nginx/ssl
Do a proper reboot
shutdown -r now
A flat-file CMS is a platform that does not require a database; instead, it saves its data to a set of text files. PluXml uses XML files to store its data, with additional support for media files and comments.
It is portable: backing up the website takes a single command, and the data fits on a single USB drive.
The admin backend is quite powerful, offering five user-role profiles. PluXml is available in eleven languages and supports static pages, RSS, comments, tags and categories.
Several themes are available, plugins can be installed from the admin panel, and a WYSIWYG editor makes writing pages easy.
To download the latest version:
cd /var/www/website
wget https://www.pluxml.org/download/pluxml-latest.zip
unzip pluxml-latest.zip
cd
This extracts the archive content to a folder called PluXml, so we have to copy or move its content into the parent folder:
cp -a /var/www/website/PluXml/. /var/www/website/
The -a option is an improved recursive option that preserves all file attributes as well as symlinks.
The . at the end of the source path is cp syntax that copies all files and folders, including hidden ones.
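nginx and php-fpm also need read access (and, for PluXml's data folder, write access) to these files. A sketch, assuming the worker account is nginx as in the nginx.conf shown earlier (on Arch the account may be http instead):

```shell
# Give ownership of the whole web root to the server account:
chown -R nginx:nginx /var/www/website
```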
We can then refer to the online documentation to install PluXml and do the necessary configuration and customisation.