Your website speed can be hugely affected if you are not taking care of optimising your web images. If you run a test on PageSpeed, the Google tool to assess your website speed, usability and more, it will give you optimisation suggestions.
Running a PageSpeed test is as easy as visiting the page and entering your website address. Lighthouse, the engine behind PageSpeed, can also be integrated into your CI pipeline if you have the technical knowledge and resources to run these tests, but I will cover that integration in another article.
When testing your site you will get suggestions on what the size of each image could be if optimised. PageSpeed also provides other suggestions, such as lazy loading off-screen images, but this article will focus on the use of the convert command to generate optimised images.
As shown at https://developers.google.com/speed/docs/insights/OptimizeImages, convert can be used on your machine, for instance to optimise JPGs as follows:
convert puzzle.jpg -sampling-factor 4:2:0 -strip -quality 85 -interlace JPEG -colorspace sRGB puzzle_optimised.jpg
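To check the result you can compare the two file sizes, for instance (assuming both files sit in the current folder):

ls -lh puzzle.jpg puzzle_optimised.jpg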
To avoid manually running the command on each single photo, I have published a small script that uses convert to optimise the JPG images present in your current folder.
To run the optimisation you can use a loop over the JPG images in the current folder, as sketched below.
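This is a minimal sketch of what the published script does (the actual script may differ); it processes every .jpg file in the current folder:

# write an optimised copy of every JPG, with an _optimised suffix;
# note a second run would also re-process the copies already generated
for f in *.jpg; do
  convert "$f" -sampling-factor 4:2:0 -strip -quality 85 -interlace JPEG -colorspace sRGB "${f%.jpg}_optimised.jpg"
done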
This will yield new images with the _optimised suffix in their name.
An example of an optimised vs original image
Sicilian flavour flour image, original, 41 kB.
Direct link to the non-optimised image, served without our website's own optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px.jpg
The Sicilian flavour flour image optimised for the web is only 21 kB, compared to the 41 kB of the original.
Direct link to the optimised image, served without our website's own optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px_optimised.jpg
As you can see above, this optimisation can make a massive difference. We have seen size reductions of around 50% in our images; the saving might be lower at times, but it is still a great optimisation.
And can you see any difference between the images?
Recently I had to troubleshoot a disk space problem, a full hard disk on our client's server, caused by a Docker process incorrectly writing log files to its local file system instead of forwarding them to stdout.
# df on / shows 372G used and only 44G available
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  640M  5.7G  10% /run
/dev/md2        438G  372G   44G  90% /
tmpfs            32G  3.4M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  486M     0 100% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1001
As visible from the snippet above, the /dev/md2 partition has 372G of space in use.
While investigating, we used du with a command along the lines of the sketch below (the exact invocation was not recorded; it assumes GNU du and sort, which support the -h human-readable flags).
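# report first-level directory sizes, sorted with the largest last;
# repeat on the biggest directory to drill down to the culprit
sudo du -h --max-depth=1 / | sort -h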
Drilling down, we realised that a container was taking up a lot of space, having written a massive log file of about 330G under `/var/lib/docker/containers/fileID.log`.
Although we removed the offending container and its whole stack from the Rancher deployment, we realised the disk space was still allocated: the file had already been deleted, but it was not being released. As a result we looked for deleted files that had not been released yet. To find them you can use lsof,
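for example via its +L1 flag, which lists open files with a link count below one, i.e. deleted but still held open by a process (a standard lsof flag, though not the exact command we used):

sudo lsof +L1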
or, in our case, since lsof was not installed, just use find and ls.
Indeed you can ls /proc/*/fd as such:
sudo ls -lU /proc/*/fd | grep deleted
lr-x------ 1 root root 64 Oct 13 2018 42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 14 2018 123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 15 2018 139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Apr 18 06:00 146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
and see the files marked as deleted but not yet released.
Then you have a few options: you can try to stop or restart the process holding the file or, as in our case, where that might cause downtime for our client, just overwrite the file with empty content. If the file were still present it could be truncated with
: > /path/to/the/file.log
(`:` is the shell no-op builtin, so redirecting its empty output truncates the target to zero bytes without removing it). But since our file had been deleted yet was still held open by a process, we instead overwrite it via its process ID and file descriptor, running the truncation as shown here
: > "/proc/$pid/fd/$fd"
or
sudo sh -c ': > /proc/1233/fd/146'
if you experience permission problems in bash.
To find the process id and the file descriptor you can run
sudo find /proc/*/fd -ls | grep deleted | grep docker
17288 0 lr-x------ 1 root root 64 Oct 13 2018 /proc/1233/fd/42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
1106161 0 lr-x------ 1 root root 64 Oct 14 2018 /proc/1233/fd/123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
4993234 0 lr-x------ 1 root root 64 Oct 15 2018 /proc/1233/fd/139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
1659260673 0 lr-x------ 1 root root 64 Apr 18 06:00 /proc/1233/fd/146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
and as you can see above, the $pid (1233) and the $fd (42, 123, 139, 146) values are visible in each /proc/$pid/fd/$fd path.
Once you overwrite the content, your filesystem will finally have its space freed.
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 680M 5.7G 11% /run
/dev/md2 438G 38G 377G 10% /
tmpfs 32G 3.4M 32G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md1 488M 486M 0 100% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/1001
I hope you liked this article; please feel free to share it online or contact us if you have any questions.
Major internet service provider causing extensive damage to our business due to their unreasonable and hidden service policies.
As a small UK business we run international online software, with several users signing up to our service from Gmail or from email powered by GSuite, the Google service that runs email on their servers on behalf of your website/business.
Gmail is a great service, but as a business we have discovered that they have an internal classification of how they handle spam that goes as far as rejecting emails. An example of their rejection bounce message shows:
<d**************o@gmail.com>: host gmail-smtp-in.l.google.com[74.125.142.26] said:
    550-5.7.1 [xxxxxxxxxxxxx] Our system has detected that this message is
    550-5.7.1 likely suspicious due to the very low reputation of the sending
    550-5.7.1 domain. To best protect our users from spam, the message has been
    550-5.7.1 blocked. Please visit
    550 5.7.1 https://support.google.com/mail/answer/188131 for more information.
    m14si485512pgs.39 – gsmtp (in reply to end of DATA command)
We have now been experiencing this disservice since the 29th of January 2020, and all our emails are rejected, despite our following all their guidelines and suggestions.
The situation is so bad that Google/Gmail users send emails to us and, despite them having initiated the exchange, we are still unable to reply because Gmail blocks all our emails.
Below are a few facts/comments on our experience with GMail/GSuite while trying to troubleshoot the problem:
See our support thread on the Google forum, with all the detailed exchanges with product experts and other users confirming all the steps we have taken to try to troubleshoot the problem, and how none of their suggestions work: https://support.google.com/mail/thread/27427166 .
If you are also experiencing this problem, have managed to solve it, or just want to share more information regarding it, please contact us.
When you host private code on GitLab, for instance for a reusable component, you will not be able to clone it unless you are authenticated with the GitLab backend and authorised on the repository.
Access tokens provide a great alternative way to clone your project or add it as a dependency of a parent project. For example, you can use tokens as part of your Continuous Integration pipeline to build, test and deploy your project.
If you require a private project in your composer.json (Composer being the de facto package manager in PHP), such as
composer.phar require "bizmate/my-private-package:2.*"
You will see something like
Could not fetch https://gitlab.com/api/v4/projects/bizmate%2Fmy-private-package/repository/archive.zip?sha=...,
enter your gitlab.com credentials to access private repos
A token will be created and stored in "/home/composer/.composer/auth.json", your password will never be stored
To revoke access to this token you can visit gitlab.com/profile/applications
Username:
Password:
Bad credentials.
You can also manually create a personal token at https://gitlab.com/profile/personal_access_tokens
Add it using "composer config --global --auth gitlab-token.gitlab.com <token>"
Using username and password credentials is not a great approach: they are critical information, and if you have 2FA enabled it might not work at all. So I created a personal access token and tried adding it through the composer command, but somehow this did not work, possibly due to the added complexity of running the development environment in Docker. In the end I found a simpler way around it.
Just add your access token to your git configuration, as per the command below.
git config --global gitlab.accesstoken {TOKEN_VALUE}
The above works straight away, and you will be able to add your project in Composer without putting any information in its auth.json file.
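Note that for Composer to resolve a private GitLab package at all, your composer.json also needs a repositories entry pointing at the repo. A minimal sketch, assuming the package name and URL from the example above (your real values may differ):

{
    "repositories": [
        { "type": "vcs", "url": "https://gitlab.com/bizmate/my-private-package.git" }
    ],
    "require": {
        "bizmate/my-private-package": "2.*"
    }
}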
Simple Email Service (SES) is one of the great AWS services for email.
As per the original instructions, credentials for this service can be generated from the console, or you can derive them from your user credentials (your AWS Secret Access Key).
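As a hedged illustration of that derivation route, the sketch below follows the legacy algorithm AWS documented for turning a Secret Access Key into an SES SMTP password (newer regions use a SigV4-based variant, so treat this as an assumption, not the definitive method):

<?php
// Legacy SES SMTP password derivation (assumed pre-SigV4 algorithm).
// The secret key comes from the environment; never hard-code it.
$secret  = getenv('AWS_SECRET_ACCESS_KEY');
$version = chr(0x02);                                           // fixed version byte
$hmac    = hash_hmac('sha256', 'SendRawEmail', $secret, true);  // raw binary HMAC
echo base64_encode($version . $hmac) . PHP_EOL;                 // the SMTP password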
RabbitMQ is a great option for the implementation of an AMQP (Advanced Message Queuing Protocol) queuing system.
Why would you need a queuing system? So that you can offload heavy tasks to be processed asynchronously in a separate process, avoiding blocking your client, on your website for instance.
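As a minimal sketch of that offloading idea, assuming the php-amqplib library and a local broker with default credentials (the post itself may use a different client), a web request can publish the heavy task to a queue and return immediately:

<?php
require 'vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// connect to a local RabbitMQ broker with the default guest credentials
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// declare a durable queue for the heavy work
$channel->queue_declare('heavy_tasks', false, true, false, false);

// publish the task payload; a separate worker process consumes it asynchronously
$channel->basic_publish(new AMQPMessage('{"job":"resize-image","id":42}'), '', 'heavy_tasks');

$channel->close();
$connection->close();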
Guzzle is a great wrapper to run cURL requests from your PHP applications.
As part of my development requirements for MyReviews.link, I had to implement a fast, concurrent way to perform HTTP requests to several servers.
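A minimal sketch of concurrent requests with Guzzle promises (the endpoints are placeholders, not the actual MyReviews.link targets; Utils::unwrap is the Guzzle 7 spelling, while older versions expose the GuzzleHttp\Promise\unwrap function):

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

$client = new Client(['timeout' => 5]);

// fire both requests without blocking; they run concurrently over curl_multi
$promises = [
    'first'  => $client->getAsync('https://example.com/api/one'),
    'second' => $client->getAsync('https://example.org/api/two'),
];

// wait for every response (throws if any request fails)
$responses = Utils::unwrap($promises);

foreach ($responses as $name => $response) {
    echo $name . ': ' . $response->getStatusCode() . PHP_EOL;
}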
This is a walkthrough of the requirement to mount an AWS S3 bucket on a virtual machine, on your PC, provisioned by VirtualBox, with an FTP service that can use the S3 bucket for persistent distributed storage. The reason to have an S3 bucket is to allow the use of a single persistent storage for several services (for instance multiple FTP servers) running for different clients and exposed by a dedicated virtual machine.
Please note the instructions below are for Windows only.
You might want to enable access to your Windows laptop or to the internet through a Wifi network.