
Little IT Tricks

LetsEncrypt ACMEv2 SSL certificate provisioning in Rancher 1.6 – legacy

Preface

Rancher 1.6 allows the use of the old/legacy Cattle engine. If you are still running this old version of Rancher you should probably consider upgrading, or moving to another orchestrator if you do not want to use Kubernetes, as current Rancher no longer supports Cattle or similarly simple solutions such as Docker Swarm.

LetsEncrypt on Rancher 1.6 using ACME v2

Provisioning SSL certs on Rancher 1.6 and earlier was possible by deploying a service with the image janeczku/rancher-letsencrypt:v0.5.0 and providing the correct configuration.

This solution used the LetsEncrypt ACME v1 API, and as this API is now deprecated you will see errors such as

31/07/2021 16:28:42time="2021-07-31T15:28:42Z" level=info msg="Starting Let's Encrypt Certificate Manager v0.5.0 0913231"
31/07/2021 16:28:42time="2021-07-31T15:28:42Z" level=fatal msg="LetsEncrypt client: Could not create client: get directory at 'https://acme-v01.api.letsencrypt.org/directory': acme: Error 403 - urn:acme:error:serverInternal - ACMEv1 is deprecated and you can no longer get certificates from this endpoint. Please use the ACMEv2 endpoint, you may need to update your ACME client software to do so. Visit https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430/27 for more information."


There is an alternative to this solution: simply use a different image that implements the ACMEv2 integration with LetsEncrypt.

To deploy the new solution, follow the steps below:

  1. Create a new service (not from the catalogue), give it a name such as MyDomainLetsEncrypt and use the image vxcontrol/rancher-letsencrypt:v1.0.0
  2. Add these volumes, making sure the certificate volume is the one Rancher actually uses to load the certificates:
    1. /var/lib/rancher:/var/lib/rancher
    2. MyCertificatesNamedVolume:/etc/letsencrypt – notice that the name of the volume depends on your current setup.
  3. In the “Command” tab, set the “Console” option to none
  4. In the “Labels” tab, create the following 2 labels:
    io.rancher.container.agent.role=environment
    io.rancher.container.create_agent=true
  5. Copy the environment variables template below and fill in your domain and email values. Then click to add an Environment Variable and paste the whole block into the first “Variable” input field; all the environment variables will be added at once
    API_VERSION=Production
    AURORA_ENDPOINT=
    AURORA_KEY=
    AURORA_USER_ID=
    AWS_ACCESS_KEY=
    AWS_SECRET_KEY=
    AZURE_CLIENT_ID=
    AZURE_CLIENT_SECRET=
    AZURE_RESOURCE_GROUP=
    AZURE_SUBSCRIPTION_ID=
    AZURE_TENANT_ID=
    CERT_NAME=**ENTER DOMAIN NAME**
    CLOUDFLARE_EMAIL=
    CLOUDFLARE_KEY=
    DNSIMPLE_EMAIL=
    DNSIMPLE_KEY=
    DNS_RESOLVERS=8.8.8.8:53,8.8.4.4:53
    DOMAINS=**ENTER DOMAIN NAME**
    DO_ACCESS_TOKEN=
    DYN_CUSTOMER_NAME=
    DYN_PASSWORD=
    DYN_USER_NAME=
    EMAIL=**ENTER YOUR EMAIL**
    EULA=Yes
    GANDI_API_KEY=
    NS1_API_KEY=
    OVH_APPLICATION_KEY=
    OVH_APPLICATION_SECRET=
    OVH_CONSUMER_KEY=
    PROVIDER=HTTP
    PUBLIC_KEY_TYPE=RSA-2048
    RENEWAL_PERIOD_DAYS=20
    RENEWAL_TIME=12
    RUN_ONCE=false
    VULTR_API_KEY=
  6. Create the service. Once it is created and running correctly you will see it producing logs such as
    31/07/2021 17:31:08time="2021-07-31T16:31:08Z" level=info msg="Starting Let's Encrypt Certificate Manager v1.0.0 eb89fad"
    31/07/2021 17:31:08time="2021-07-31T16:31:08Z" level=info msg="Generating private key (2048) for myemail@mydomain.com."
    31/07/2021 17:31:09time="2021-07-31T16:31:09Z" level=info msg="Creating Let's Encrypt account for myemail@mydomain.com"
    31/07/2021 17:31:10time="2021-07-31T16:31:10Z" level=info msg="Using Let's Encrypt Production API"
  7. You can now map the load balancer to redirect calls on port 80 for yourdomain.com and path /.well-known/acme-challenge to the service you created above, so that it can handle the SSL certificate generation.
  8. Once the certificate is generated, map it on port 443 of your load balancer service for the same domain.
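For reference, the steps above can also be sketched as a Rancher 1.6 docker-compose.yml fragment. The service name, volume name and domain/email values below are the placeholders used in the steps, so adjust them to your setup; only the non-empty variables from the template are shown, whereas in the UI you would paste the full block from step 5.

```yaml
version: '2'
services:
  MyDomainLetsEncrypt:
    image: vxcontrol/rancher-letsencrypt:v1.0.0
    labels:
      io.rancher.container.agent.role: environment
      io.rancher.container.create_agent: 'true'
    volumes:
      - /var/lib/rancher:/var/lib/rancher
      - MyCertificatesNamedVolume:/etc/letsencrypt
    environment:
      API_VERSION: Production
      CERT_NAME: mydomain.com          # placeholder
      DOMAINS: mydomain.com            # placeholder
      EMAIL: myemail@mydomain.com      # placeholder
      EULA: 'Yes'
      PROVIDER: HTTP
      PUBLIC_KEY_TYPE: RSA-2048
      RENEWAL_PERIOD_DAYS: '20'
      RENEWAL_TIME: '12'
      RUN_ONCE: 'false'
volumes:
  MyCertificatesNamedVolume:
    external: true
```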

I hope the above is helpful but do feel free to contact us if you have any feedback or questions.

 

 

Optimise images for web publishing, improve your user experience and Google pagespeed rating.

Your website speed can be hugely affected if you do not take care of optimising your web images. Optimisation suggestions are provided if you run a test on PageSpeed, the Google tool to assess your website speed, usability and more.

Running a PageSpeed test is as easy as visiting the page and entering your website address. Lighthouse can also be integrated into your CI pipeline if you have the technical knowledge and resources to run these tests, but I will cover Lighthouse integration in another article.

When testing your site you will get suggestions on what the image sizes might be if optimised. PageSpeed also provides other suggestions, such as lazy loading off-screen images, but this article will focus on the use of the convert command to generate optimised images.

As shown on https://developers.google.com/speed/docs/insights/OptimizeImages, convert can be used on your machine, for instance to optimise JPGs, as such

convert puzzle.jpg -sampling-factor 4:2:0 -strip -quality 85 -interlace JPEG -colorspace sRGB puzzle_optimised.jpg

To avoid manually running the command on each single photo, I have published a small script that uses convert to optimise the JPG images present in your current folder.

To run the optimisation you can

  1. git clone the repository `git clone git@github.com:bizmate/bash-essentials.git ~/bash-essentials`
  2. cd the folder with your last photoshoot or products folder
  3. run `~/bash-essentials/bin/web-image-optimiser.sh` <- notice you can place the bash script anywhere on your machine as long as you call it from inside the images folder

The result will yield new images with the _optimised suffix in their name.
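A minimal sketch of what such a loop can look like, assuming ImageMagick's convert is installed, using the same flags as the command shown earlier:

```shell
# Optimise every JPG in the current folder, writing copies with an
# _optimised suffix next to the originals.
for img in *.jpg; do
  [ -e "$img" ] || continue                 # no-op when the folder has no JPGs
  convert "$img" -sampling-factor 4:2:0 -strip -quality 85 \
          -interlace JPEG -colorspace sRGB "${img%.jpg}_optimised.jpg"
done
```

The `${img%.jpg}` expansion strips the extension so puzzle.jpg becomes puzzle_optimised.jpg.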

An example of an optimised vs original image

Sicilian flavour original flour image

Sicilian flavour original flour image, 41 kB

Link to the original, non-optimised image, served without our website's own optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px.jpg

Sicilian flavour optimised for web flour image

Sicilian flavour optimised for web flour image is only 21 kB compared to the 41 kB of the original image.

Link to the optimised image, served without our website's own optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px_optimised.jpg

As you can see above, this optimisation can make a massive difference. We have seen size reductions of at least 50% in our images; although the saving might be lower at times, it is still a great optimisation.

And can you see any difference between the images?

Clear up space/delete file being held up by a process in linux

Recently I had to troubleshoot a disk space problem, a full hard disk on a client server, caused by a docker process incorrectly writing files to its local file system instead of forwarding them to stdout.

# df on / shows 372G used and only 44G available
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  640M  5.7G  10% /run
/dev/md2        438G  372G   44G  90% /
tmpfs            32G  3.4M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  486M     0 100% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1001

As visible from the snippet above the /dev/md2 partition has 372G of space in use.

While investigating with du we realised that a container was taking a lot of space, writing a massive file of about 330G under `/var/lib/docker/containers/fileID.log`.

Although we removed the offending container and its whole stack from the Rancher deployment, the file was not being released and the disk space was still allocated even though the file had already been deleted. As a result we looked for deleted files that had not been released yet. You can use lsof for this or, as in our case where it was not installed, just use find/ls.

Indeed you can ls /proc/*/fd as such

sudo ls -lU /proc/*/fd | grep deleted 
lr-x------ 1 root root 64 Oct 13 2018 42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 14 2018 123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 15 2018 139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Apr 18 06:00 146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)

and see files marked as deleted but not released yet.

Then you have a few options: you can try to stop/restart the process holding the file, or, as in our case where that might have caused downtime for our client, simply overwrite the file with empty content. If the file were still present it could be truncated with

: > /path/to/the/file.log

but as the file was deleted yet still held open by a process, we can overwrite it by looking up the process ID and the file descriptor and running the overwrite as shown here

: > "/proc/$pid/fd/$fd"

or

sudo sh -c ': > /proc/1233/fd/146'

if you experience permission problems in bash.

To find the process id and the file descriptor you can run

sudo find /proc/*/fd -ls | grep  deleted | grep docker
17288      0 lr-x------   1 root       root             64 Oct 13  2018 /proc/1233/fd/42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)
1106161      0 lr-x------   1 root       root             64 Oct 14  2018 /proc/1233/fd/123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)
4993234      0 lr-x------   1 root       root             64 Oct 15  2018 /proc/1233/fd/139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)
1659260673      0 lr-x------   1 root       root             64 Apr 18 06:00 /proc/1233/fd/146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)

and as you can see in the output above, the $pid and the $fd values are visible in the /proc paths: for instance /proc/1233/fd/146 gives a $pid of 1233 and a $fd of 146.
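The lookup and the truncation can be combined in one hedged sketch: in the `find -ls` output above the /proc path is the eleventh whitespace-separated field, so awk can extract it for every deleted docker log. Run this as root, and only after confirming the matches are really the files you want to empty.

```shell
# For each deleted-but-still-open docker log, truncate it through its
# /proc handle; the owning process keeps its file descriptor.
find /proc/*/fd -ls 2>/dev/null | grep deleted | grep docker |
  awk '{print $11}' |
  while read -r fd_path; do
    : > "$fd_path"     # overwrite in place, freeing the blocks
  done
```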

Once you overwrite the content, your filesystem will finally be freed of the extra space.

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  680M  5.7G  11% /run
/dev/md2        438G   38G  377G  10% /
tmpfs            32G  3.4M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  486M     0 100% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1001

I hope you like this article, please feel free to share it online or contact us if you have any questions.

GitLab Clone Private Repository with Access Token

When you host private code on GitLab, for instance a reusable component, you will not be able to clone it unless you are authenticated with the GitLab backend and authorised on the repository.

Access tokens are a great alternative way to clone your project or add it as a dependency to a parent project. For example, you can use tokens as part of your Continuous Integration pipeline to build, test and deploy your project.

If you add a project in your composer.json (Composer is the de facto package manager in PHP), such as

composer.phar require "bizmate/my-private-package:2.*"

You will see something like

Could not fetch https://gitlab.com/api/v4/projects/bizmate%2Fmy-private-package/repository/archive.zip?sha=..., enter your gitlab.com credentials to access private repos
A token will be created and stored in "/home/composer/.composer/auth.json", your password will never be stored
To revoke access to this token you can visit gitlab.com/profile/applications
Username: 
Password: 
Bad credentials.
You can also manually create a personal token at https://gitlab.com/profile/personal_access_tokens
Add it using "composer config --global --auth gitlab-token.gitlab.com <token>"

Using username and password credentials is not a great approach, as they are critical information, and it might not work at all if you have 2FA enabled. So I created a personal access token and tried adding it through the composer command, but somehow this did not work. That might be down to the complexity of running a development environment in docker, but I found a simpler way around it.

Just add your access token to your git configuration, as per command below.

git config --global gitlab.accesstoken {TOKEN_VALUE}

The above works straight away and you will be able to add your project in Composer without adding any information to its auth.json file.
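A hedged sketch of the full flow; the token value below is a placeholder, substitute your own personal access token. The second command simply reads the setting back so you can confirm git stored it:

```shell
# Store the GitLab access token in the global git configuration
git config --global gitlab.accesstoken "YOUR-TOKEN-HERE"

# Read it back to confirm it was saved
git config --global --get gitlab.accesstoken
```

With the token in place, a `composer.phar require` of the private package should authenticate without prompting for credentials.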

phpbrew install php 7.2 with almost everything

$ phpbrew install 7.2 +everything -dtrace -pgsql -tidy

AWS SES Credentials Generation

Simple Email Service (SES) is one of the great AWS services for email.

As per the original instructions, credentials for this service can be generated from the console, or you can derive them from your user credentials (AWS Secret Access Key).

Mount a AWS S3 bucket on a VM running a FTP service, and spin it also on AWS EC2

This is a walkthrough of mounting an AWS S3 bucket on a virtual machine, provisioned on your PC with VirtualBox, running an FTP service that uses the S3 bucket for persistent distributed storage. The reason to have an S3 bucket is to allow a single persistent store to back several services (for instance multiple FTP servers) running for different clients and exposed by a dedicated virtual machine.

Enable a WIFI POS from your Windows computer

Please note the below instructions are for Windows only

You might want to enable access to your Windows laptop, or to the internet, through a WiFi network.

update git on ubuntu server

Are you stuck with an old version of Git on your Ubuntu server? Please continue reading below.

Mysql – enable table logging in general_log

Sometimes, in a DB used by different applications, you might like to monitor which queries are executed and see them directly in your MySQL client, like SQLyog or Sequel Pro.
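A minimal sketch of how this is usually switched on, assuming a user with the privileges to set global variables; note that table logging adds overhead, so turn it off when you are done:

```sql
-- Send the general log to the mysql.general_log table instead of a file
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Inspect the most recent statements from any client
SELECT event_time, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 20;

-- Disable again when finished
SET GLOBAL general_log = 'OFF';
```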