Rancher 1.6 allows the use of the old/legacy Cattle engine. If you are still running this old version of Rancher you should consider upgrading, or moving to another orchestrator if you do not want to use Kubernetes, as newer Rancher releases no longer support Cattle or similarly simple solutions such as Docker Swarm.
Provisioning SSL certificates on Rancher 1.6 and earlier was possible by deploying a service with the image janeczku/rancher-letsencrypt:v0.5.0 and providing the correct configuration.
This solution used the Let's Encrypt ACME v1 API, and as this API is now deprecated you will see errors such as:
31/07/2021 16:28:42 time="2021-07-31T15:28:42Z" level=info msg="Starting Let's Encrypt Certificate Manager v0.5.0 0913231"
31/07/2021 16:28:42 time="2021-07-31T15:28:42Z" level=fatal msg="LetsEncrypt client: Could not create client: get directory at 'https://acme-v01.api.letsencrypt.org/directory': acme: Error 403 - urn:acme:error:serverInternal - ACMEv1 is deprecated and you can no longer get certificates from this endpoint. Please use the ACMEv2 endpoint, you may need to update your ACME client software to do so. Visit https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430/27 for more information."
There is an alternative: use a different image that implements the ACME v2 integration with Let's Encrypt. To deploy the new solution, create a service with the labels and environment variables below.
io.rancher.container.agent.role=environment
io.rancher.container.create_agent=true
API_VERSION=Production
AURORA_ENDPOINT=
AURORA_KEY=
AURORA_USER_ID=
AWS_ACCESS_KEY=
AWS_SECRET_KEY=
AZURE_CLIENT_ID=
AZURE_CLIENT_SECRET=
AZURE_RESOURCE_GROUP=
AZURE_SUBSCRIPTION_ID=
AZURE_TENANT_ID=
CERT_NAME=**ENTER DOMAIN NAME**
CLOUDFLARE_EMAIL=
CLOUDFLARE_KEY=
DNSIMPLE_EMAIL=
DNSIMPLE_KEY=
DNS_RESOLVERS=8.8.8.8:53,8.8.4.4:53
DOMAINS=**ENTER DOMAIN NAME**
DO_ACCESS_TOKEN=
DYN_CUSTOMER_NAME=
DYN_PASSWORD=
DYN_USER_NAME=
EMAIL=**ENTER YOUR EMAIL**
EULA=Yes
GANDI_API_KEY=
NS1_API_KEY=
OVH_APPLICATION_KEY=
OVH_APPLICATION_SECRET=
OVH_CONSUMER_KEY=
PROVIDER=HTTP
PUBLIC_KEY_TYPE=RSA-2048
RENEWAL_PERIOD_DAYS=20
RENEWAL_TIME=12
RUN_ONCE=false
VULTR_API_KEY=
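If you prefer to prepare the stack from the command line rather than the Rancher UI, the following is a minimal docker-compose.yml sketch written as a shell heredoc. The image name is a placeholder (substitute the ACME v2-capable image you are deploying), and only a subset of the environment variables above is shown:

# Sketch only: replace the placeholder image and fill in your real
# domain and email before deploying the stack.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  letsencrypt:
    image: <acme-v2-capable-letsencrypt-image>  # placeholder, not a real tag
    environment:
      API_VERSION: Production
      EULA: 'Yes'
      EMAIL: myemail@mydomain.com
      CERT_NAME: mydomain.com
      DOMAINS: mydomain.com
      PROVIDER: HTTP
      PUBLIC_KEY_TYPE: RSA-2048
      RUN_ONCE: 'false'
    labels:
      io.rancher.container.agent.role: environment
      io.rancher.container.create_agent: 'true'
EOF

Once the service starts you should see log output similar to: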
31/07/2021 17:31:08 time="2021-07-31T16:31:08Z" level=info msg="Starting Let's Encrypt Certificate Manager v1.0.0 eb89fad"
31/07/2021 17:31:08 time="2021-07-31T16:31:08Z" level=info msg="Generating private key (2048) for myemail@mydomain.com."
31/07/2021 17:31:09 time="2021-07-31T16:31:09Z" level=info msg="Creating Let's Encrypt account for myemail@mydomain.com"
31/07/2021 17:31:10 time="2021-07-31T16:31:10Z" level=info msg="Using Let's Encrypt Production API"
I hope the above is helpful; do feel free to contact us if you have any feedback or questions.
Your website speed can be hugely affected if you do not take care to optimise your web images. Optimisation suggestions are provided when you run a test on PageSpeed Insights, the Google tool that assesses your website's speed, usability and more.
Running a pagespeed test is as easy as visiting the page and entering your website address. Also lightspeed can be integrated in your CI pipeline if you have the technical and knowledge resources to run these test, but I will write a post about lightspeed integration in another article.
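If you want a quick taste of Lighthouse from the command line, you can run its CLI, assuming Node.js is installed (the URL below is of course a placeholder for your own site):

# Run a Lighthouse audit and save the report as HTML
npx lighthouse https://www.example.com --output html --output-path ./report.html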
When testing your site you will get suggestions on what image sizes could be if optimised. PageSpeed also provides other suggestions, such as lazy loading off-screen images, but this article will focus on the use of the convert command to generate optimised images.
As shown at https://developers.google.com/speed/docs/insights/OptimizeImages, convert can be used on your machine, for instance to optimise JPGs, as such:
convert puzzle.jpg -sampling-factor 4:2:0 -strip -quality 85 -interlace JPEG -colorspace sRGB puzzle_optimised.jpg
To avoid manually running the command for every single photo, I have published a small script that uses convert to optimise the JPG images in your current folder. To run the optimisation you can use a loop like the one sketched below.
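This is a minimal sketch of such a loop, assuming your images use a lowercase .jpg extension; the published script may differ in detail:

# Optimise every .jpg in the current folder, writing a copy with an
# _optimised suffix; the flags follow the Google PageSpeed example above.
for img in *.jpg; do
  convert "$img" -sampling-factor 4:2:0 -strip -quality 85 \
    -interlace JPEG -colorspace sRGB "${img%.jpg}_optimised.jpg"
done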
The result will yield new images with the _optimised suffix in their name.
An example of an optimised vs original image:
Link to the original, non-optimised image, served without our website optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px.jpg
Link to the optimised image, served without our website optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px_optimised.jpg
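You can compare the size of the two files without downloading them, provided the server returns a Content-Length header, for example:

# HEAD requests: compare the reported size of the two images
curl -sI http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px.jpg | grep -i content-length
curl -sI http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px_optimised.jpg | grep -i content-length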
As you can see above, this optimisation can make a massive difference. We have seen size reductions of at least 50% in our images, and although the gain might be lower at times it is still a great optimisation.
And can you see any difference between the images?
Recently I had to troubleshoot a disk space problem, a full hard disk on a client server, caused by a Docker process incorrectly writing log files to its local file system instead of forwarding them to stdout.
# df on / shows 372G used and only 44G available
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  640M  5.7G  10% /run
/dev/md2        438G  372G   44G  90% /
tmpfs            32G  3.4M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  486M     0 100% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1001
As visible from the snippet above, the /dev/md2 partition has 372G of space in use.
While investigating with du (see the sketch below), we realised that a container was taking up a lot of space, writing a massive file of about 330G under `/var/lib/docker/containers/fileID.log`.
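A du invocation along these lines will surface the heaviest directories; adjust the path and depth to your setup:

# Show the largest first-level directories under the Docker containers root
sudo du -h --max-depth=1 /var/lib/docker/containers | sort -h | tail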
Although we removed the offending container and its whole stack from the Rancher deployment, the file was not being released and the disk space was still allocated even though the file had already been deleted. As a result we looked for deleted files that had not been released yet. To look for them you can use lsof
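for example like this (lsof's +L1 option lists open files with a link count below one, i.e. deleted but still held open):

# Deleted-but-open files belonging to Docker containers
sudo lsof +L1 | grep docker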
or, in our case, since lsof was not installed, just use find/ls.
Indeed you can list /proc/*/fd as such
sudo ls -lU /proc/*/fd | grep deleted
lr-x------ 1 root root 64 Oct 13 2018 42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 14 2018 123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 15 2018 139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Apr 18 06:00 146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
and see files marked as deleted but not released yet.
Then you have a few options: you can try to stop or restart the process holding the file, or, as in our case (a restart might have caused downtime for our client), simply overwrite the file with empty content. If the file were still present it could be truncated with
: > /path/to/the/file.log
but as the file was indeed deleted yet still held open by a process, we can overwrite it by looking up the process ID and the file descriptor and running the overwrite as shown here
: > "/proc/$pid/fd/$fd"
or
sudo sh -c ': > /proc/1233/fd/146'
if you experience permission problems in bash.
To find the process ID and the file descriptor you can run
sudo find /proc/*/fd -ls | grep deleted | grep docker
17288 0 lr-x------ 1 root root 64 Oct 13 2018 /proc/1233/fd/42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
1106161 0 lr-x------ 1 root root 64 Oct 14 2018 /proc/1233/fd/123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
4993234 0 lr-x------ 1 root root 64 Oct 15 2018 /proc/1233/fd/139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
1659260673 0 lr-x------ 1 root root 64 Apr 18 06:00 /proc/1233/fd/146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
and as you can see above, the $pid and $fd values are visible in this breakdown: in /proc/1233/fd/146, for example, 1233 is the process ID and 146 the file descriptor.
Once you overwrite the content, the extra space will finally be freed on your filesystem.
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 680M 5.7G 11% /run
/dev/md2 438G 38G 377G 10% /
tmpfs 32G 3.4M 32G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md1 488M 486M 0 100% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/1001
I hope you like this article; please feel free to share it online or contact us if you have any questions.
A major internet service provider is causing extensive damage to our business due to its unreasonable and hidden service policies.
As a small UK business we run international online software, with many users signing up to our service from Gmail addresses or from email powered by G Suite, the Google service that runs email on Google's servers on behalf of your website/business.
Gmail is a great service, but as a business we have discovered that Google applies an internal classification for handling spam that goes as far as rejecting emails outright. An example of their rejection bounce message:
<d**************o@gmail.com>: host gmail-smtp-in.l.google.com[74.125.142.26] said: 550-5.7.1 [xxxxxxxxxxxxx] Our system has detected that this message is 550-5.7.1 likely suspicious due to the very low reputation of the sending 550-5.7.1 domain. To best protect our users from spam, the message has been 550-5.7.1 blocked. Please visit 550 5.7.1 https://support.google.com/mail/answer/188131 for more information. m14si485512pgs.39 – gsmtp (in reply to end of DATA command)
We have been experiencing this disservice since the 29th of January 2020 and all our emails are rejected, despite following all their guidelines and suggestions.
The situation is so bad that Gmail users send emails to us and, even though they initiated the conversation, we are still unable to reply because Gmail blocks all our emails.
Below are a few facts and comments on our experience with Gmail/G Suite while trying to troubleshoot the problem:
See our support thread on the Google forum, with all the detailed exchanges with product experts and other users confirming the steps we have taken to troubleshoot the problem and how none of the suggestions work: https://support.google.com/mail/thread/27427166
If you are also experiencing this problem, have managed to solve it, or just have more information regarding it, please contact us.
When you host private code on GitLab, for instance for a reusable component, you will not be able to clone it unless you are authenticated with the GitLab backend and authorised on the repository.
Access tokens are a great alternative way to clone your project or add it as a dependency to a parent project. For example, you can use tokens as part of your Continuous Integration pipeline to build, test and deploy your project.
If you require a private project in your composer.json (Composer being the de facto package manager for PHP), such as
composer.phar require "bizmate/my-private-package:2.*"
You will see something like
Could not fetch https://gitlab.com/api/v4/projects/bizmate%2Fmy-private-package/repository/archive.zip?sha=..., enter your gitlab.com credentials to access private repos
A token will be created and stored in "/home/composer/.composer/auth.json", your password will never be stored
To revoke access to this token you can visit gitlab.com/profile/applications
Username:
Password:
Bad credentials.
You can also manually create a personal token at https://gitlab.com/profile/personal_access_tokens
Add it using "composer config --global --auth gitlab-token.gitlab.com <token>"
Using username and password credentials is not a great approach: they are sensitive information, and if you have 2FA enabled they might not work at all.
The only stable solution is to create a Personal Access Token and set it in the Composer configuration, so that Composer can build the correct URLs to clone Git repositories from GitLab by adding the right token.
You can inspect your Composer configuration with
$ composer config --list
# or
$ composer config --global --list
to fetch the local or global configuration respectively. To set up the right configuration you can run
composer config --global gitlab-token.gitlab.com a______________________a
where, again, the --global option stores the token in the system-wide configuration of your machine.
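In a CI pipeline you would typically run the same command non-interactively before installing dependencies, for instance (GITLAB_COMPOSER_TOKEN is an illustrative name for a CI secret variable):

# Configure the GitLab token from a CI secret, then install dependencies
composer config --global gitlab-token.gitlab.com "$GITLAB_COMPOSER_TOKEN"
composer install --no-interaction --prefer-dist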
RabbitMQ is a great option for implementing an AMQP (Advanced Message Queuing Protocol) queuing system.
Why would you need a queuing system? So that you can offload heavy tasks to be processed asynchronously in a separate process, avoiding blocking your client, for instance on your website.
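A quick way to try RabbitMQ locally, assuming Docker is available, is the official image with the management UI enabled (the container name below is just an example):

# Start RabbitMQ; AMQP on port 5672, management UI on http://localhost:15672
docker run -d --name rabbitmq-test -p 5672:5672 -p 15672:15672 rabbitmq:3-management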
Guzzle is a great wrapper for running cURL requests from your PHP applications.
As part of my development requirements for MyReviews.link, I had to implement a fast, concurrent way to perform HTTP requests to several servers.