
News

Ubuntu kernel updates and unstable Nvidia drivers

Notice: this page is an informal log of some debugging and troubleshooting of Nvidia driver and kernel update problems. Nvidia provides Linux/Ubuntu drivers out of the box, and installing them is as easy as running

`sudo ubuntu-drivers install`.

Quite often, though, these drivers might not be compatible or might still have problems, so here we go with some troubleshooting.

Ubuntu drivers not fully installed.

Sometimes, despite ubuntu-drivers install reporting that the drivers are already installed, we might be in an unstable situation where a new kernel has been installed but the matching kernel modules were not rebuilt when the drivers were updated. In that case we can run

`sudo dkms autoinstall && reboot`
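A quick way to confirm whether the modules were actually built for the running kernel is to compare the output of dkms status with uname -r; this is a generic check and not tied to any specific driver version:

# show the running kernel version
uname -r

# list DKMS modules and the kernels they are built for;
# the nvidia entry should include the kernel reported by uname -r
dkms status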

 

Check if driver is indeed installed

Running nvidia-smi gives you an overview of the currently installed driver. If this application is not installed then the proprietary drivers are missing.

$ nvidia-smi
Thu Dec 7 11:28:46 2023 
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 2070 Off | 00000000:09:00.0 On | N/A |
| 0% 49C P8 22W / 175W | 1142MiB / 8192MiB | 1% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 6270 G /usr/lib/xorg/Xorg 497MiB |
| 0 N/A N/A 6433 G /usr/bin/gnome-shell 105MiB |
| 0 N/A N/A 7347 G /usr/bin/nextcloud 1MiB |
| 0 N/A N/A 7519 G cairo-dock 5MiB |
| 0 N/A N/A 7691 C+G ...83750398,4784756152597305294,262144 276MiB |
| 0 N/A N/A 26615 G ...zmate/jcef_26128.log --shared-files 13MiB |
| 0 N/A N/A 28209 G ...,WinRetrieveSuggestionsOnlyOnDemand 86MiB |
| 0 N/A N/A 40478 G /usr/lib/thunderbird/thunderbird 150MiB |
+---------------------------------------------------------------------------------------+
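If nvidia-smi is not available, two other quick checks worth trying (standard Ubuntu tooling, suggested here as an addition to the original notes) are whether the kernel module is loaded and which driver package Ubuntu recommends:

# check whether the nvidia kernel module is currently loaded
lsmod | grep nvidia

# list detected GPUs and the recommended driver package
ubuntu-drivers devices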

Troubleshooting errors by checking system messages

Very often boot or start-up errors are also recorded, and they can help explain why there is an error or conflict stopping your Nvidia card from working correctly, for instance detecting only one of the two monitors plugged into the card. journalctl is a great tool to check these errors. To read the kernel messages from the current boot you can run

journalctl -kb | less

In many cases you can see errors like

[   5.004707] nvidia-gpu 0000:05:00.3: i2c timeout error e0000000

You can then search for that error. Quite often some errors are in fact conflicts with other boot/start-up processes. Such modules can be blacklisted by adding modprobe entries, for example blacklisting i2c_nvidia_gpu if it errors:

echo "blacklist i2c_nvidia_gpu" > /etc/modprobe.d/blacklist_i2c-nvidia-gpu.conf

Change Kernels

In some cases you might have an old or new kernel and the Nvidia driver might not be fully compatible with it. You can use a tool called mainline which, once installed on Linux/Ubuntu, lets you install another kernel and set it as the default instead of the one currently in use.
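As a rough sketch, mainline is usually installed from its maintainer's PPA (assuming the cappelikan/ppa repository is still the current source):

# add the PPA that ships the mainline kernel installer and install it
sudo add-apt-repository ppa:cappelikan/ppa
sudo apt update
sudo apt install mainline

# then use its UI (or its command line options) to pick and install another kernel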

Checking and rolling back migrations with Doctrine

This is a quick note on how to work with Doctrine migrations.

When a change to the mapping is made you can run

I have no name!@7efd2339724b:/var/www/html$ bin/console doctrine:schema:validate

Mapping
-------


[OK] The mapping files are correct. 


Database
--------


[ERROR] The database schema is not in sync with the current mapping file.

This will show if the mapping files are correct and if indeed the mapping matches the current schema status.

If the status is not in sync then you might need to update, run or generate a migration.

To generate a migration run…

I have no name!@7efd2339724b:/var/www/html$ bin/console doctrine:migrations:diff 
Generated new migration class to "/var/www/html/src/Infrastructure/Persistence/Doctrine/Migrations/Version20231127094706.php"

This new migration will contain the schema definitions needed to bring the database up to date with the current mapping. It is generally good practice to review it before running it.
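Once reviewed, applying the pending migrations uses the standard migrate command (run against your own environment, of course):

I have no name!@7efd2339724b:/var/www/html$ bin/console doctrine:migrations:migrate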

However, if you want to roll back by running a specific migration, this can be done by first checking the migrations status:

I have no name!@7efd2339724b:/var/www/html$ bin/console doctrine:migrations:status
+----------------------+------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------+
| Configuration |
+----------------------+------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------+
| Storage | Type | Doctrine\Migrations\Metadata\Storage\TableMetadataStorageConfiguration |
| | Table Name | doctrine_migration_versions |
| | Column Name | version |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Database | Driver | Symfony\Bridge\Doctrine\Middleware\Debug\Driver |
| | Name | devdb |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Versions | Previous | 0 |
| | Current | MyApp\YelpBundle\Infrastructure\Persistence\Doctrine\Migrations\Version20230605164531 |
| | Next | MyApp\ReviewsConfigBundle\Infrastructure\Persistence\Doctrine\Migrations\Version20230118085703 |
| | Latest | MyApp\YelpBundle\Infrastructure\Persistence\Doctrine\Migrations\Version20230605164531 |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Migrations | Executed | 1 |
| | Executed Unavailable | 0 |
| | Available | 2 |
| | New | 1 |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Migration Namespaces | MyApp\ReviewsConfigBundle\Infrastructure\Persistence\Doctrine\Migrations | /var/www/html/src/Infrastructure/Persistence/Doctrine/Migrations |
| | MyApp\YelpBundle\Infrastructure\Persistence\Doctrine\Migrations | /var/www/html/vendor/bizmate/myreviews-yelp-reviews/src/Infrastructure/Persistence/Doctrine/Migrations |

Then you can roll back the migration; notice that on the command line you will need to escape the backslashes or the migration will not be identified.

I have no name!@7efd2339724b:/var/www/html$ bin/console doctrine:migrations:execute MyApp\\ReviewsConfigBundle\\Infrastructure\\Persistence\\Doctrine\\Migrations\\Version20230118085703 --down

WARNING! You are about to execute a migration in database "devdb" that could result in schema changes and data loss. Are you sure you wish to continue? (yes/no) [yes]:
> yes

[notice] Executing MyApp\ReviewsConfigBundle\Infrastructure\Persistence\Doctrine\Migrations\Version20230118085703 down
[notice] finished in 46.2ms, used 18M memory, 1 migrations executed, 23 sql queries

[OK] Successfully migrated version(s): 
MyApp\ReviewsConfigBundle\Infrastructure\Persistence\Doctrine\Migrations\Version20230118085703: [DOWN]

NOTICE: if multiple bundles are registered, remember to keep track of the other bundles' migration status as well.

 

Give meaningful names to your Natwest bank statements using bash and pdftotext

Why are certain banks so bad when it comes to naming your e-statements from your accounts?

Why is it that when you download a statement from your account you get a PDF named something like “b5862f22-c86f-4195-a756-8ea2e009da85.pdf“?

Would it help if the file had a more meaningful name such as “MYACCOUNT-NAME_ACCOUNT-NUMBER_YYYYMMDD.pdf”, where you know from the name exactly which account it belongs to and the date the file relates to?

Try a bash script to rename all files after you download them

Well, Natwest in the UK does not care about fixing this small but simple problem, so here is a small script written in bash that uses pdftotext to find the end date of the statement and give a meaningful name to your Natwest statements.
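The published script is the reference, but as a rough illustration of the idea (the file name, the date layout inside the statement and the account label below are assumptions, not necessarily what the real script does), the core of such a rename might look like:

# extract the statement text and grab the last date in "DD Month YYYY" form (assumed layout)
last_date=$(pdftotext statement.pdf - | grep -Eo '[0-9]{2} [A-Za-z]+ [0-9]{4}' | tail -n 1)

# turn it into YYYYMMDD and rename the file (the account label is hypothetical)
formatted=$(date -d "$last_date" +%Y%m%d)
mv statement.pdf "MYACCOUNT-NAME_${formatted}.pdf"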

Please note that the script is not given with any sort of warranty, so please read the script description and the license file in the repository.

By using the script you accept the disclaimer.

I hope this script helps.

Linux bash file rename with dates formatted as DD-MM-YYYY to YYYYMMDD

I am often, as I assume many other people are, served files from online services with rather strange or incomplete names.

Especially for bank accounts I like an element of semantic description, such as “CreditCard”, “{bankName}_statement”, etc. followed by a well formatted date such as YYYYMMDD, standing for year, month and day such as 20230228.

This date format helps because, regardless of the file creation time, it allows ordering files by name: with the year and month before the day, sorting by name also sorts chronologically. For example, a file named “CreditCard_30032021.pdf” will show after a file called “CreditCard_29032022.pdf” when ordering by name, even though it is older.

See example of a run here:

Bash Essentials File Rename with script available on Github

 

As a solution for this specific problem I created a bash script, meant to work with bash 4.4 and later, so it should work on any modern Linux distribution.

The script is available on GitHub but, in short, it is just a function that splits the initial DD-MM-YYYY format into day, month and year. It then removes the old date format from the file name and replaces it with YYYYMMDD.
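As a minimal sketch of that idea (not the exact function from the repository, and the file name below is just an example), the date rewrite could look like:

# split a DD-MM-YYYY date found in the file name and rebuild it as YYYYMMDD
rename_date() {
  local file="$1"
  if [[ "$file" =~ ([0-9]{2})-([0-9]{2})-([0-9]{4}) ]]; then
    local day="${BASH_REMATCH[1]}" month="${BASH_REMATCH[2]}" year="${BASH_REMATCH[3]}"
    mv -- "$file" "${file/${BASH_REMATCH[0]}/${year}${month}${day}}"
  fi
}

rename_date "CreditCard_28-02-2023.pdf"   # -> CreditCard_20230228.pdf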

Please give it a try first without confirming any changes/renaming, and if you have any questions or comments feel free to contact us.

AWS S3 CLI quick reference

DreamHost these days forces the use of DreamObjects. This is just a custom wrapper around S3, so it can be used in much the same way as the original. Simplified steps:

Set up credentials

$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2

List content in the bucket (get the bucket name from the DreamHost panel if using DreamObjects)

aws --endpoint-url https://objects-us-east-1.dream.io s3 ls

Download whole directory

aws --endpoint-url https://objects-us-east-1.dream.io s3 cp s3://trippinrecords-20220427-dh-data-backup/db_backup . --recursive

Delete directory

aws --endpoint-url https://objects-us-east-1.dream.io s3 rm s3://trippinrecords-20220427-dh-data-backup/ --recursive
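For completeness, uploading or syncing a local folder back to the bucket follows the same pattern (the bucket and folder names are just the example ones from above):

aws --endpoint-url https://objects-us-east-1.dream.io s3 sync ./db_backup s3://trippinrecords-20220427-dh-data-backup/db_backup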

LetsEncrypt ACMEv2 SSL certificate provisioning in Rancher 1.6 – legacy

Preface

Rancher 1.6 allows the use of the old/legacy Cattle engine. If you are still using this old version of Rancher you should probably consider upgrading, or moving to another orchestrator if you do not want to use Kubernetes, as newer Rancher releases no longer support simpler solutions like Cattle or Docker Swarm.

LetsEncrypt on Rancher 1.6 using ACME v2

Provisioning SSL certs on Rancher 1.6 and earlier was possible by deploying a service with the image janeczku/rancher-letsencrypt:v0.5.0 and providing the correct configuration.

This solution was using the LetsEncrypt ACME v1 API, and as this API is now deprecated you will see errors such as

31/07/2021 16:28:42time="2021-07-31T15:28:42Z" level=info msg="Starting Let's Encrypt Certificate Manager v0.5.0 0913231"
31/07/2021 16:28:42time="2021-07-31T15:28:42Z" level=fatal msg="LetsEncrypt client: Could not create client: get directory at 'https://acme-v01.api.letsencrypt.org/directory': acme: Error 403 - urn:acme:error:serverInternal - ACMEv1 is deprecated and you can no longer get certificates from this endpoint. Please use the ACMEv2 endpoint, you may need to update your ACME client software to do so. Visit https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430/27 for more information."


There is an alternative to this solution: just use a different image that implements the ACMEv2 integration with LetsEncrypt.

To deploy the new solution use these steps or follow the description below:

  1. Create a new service (not from the catalogue), give it a name such as MyDomainLetsEncrypt and use the image vxcontrol/rancher-letsencrypt:v1.0.0
  2. Add these volumes, making sure the volume used for the certificates is the one Rancher actually uses to load the certificates
    1. /var/lib/rancher:/var/lib/rancher
    2. MyCertificatesNamedVolume:/etc/letsencrypt – notice that the name of the volume depends on your current setup.
  3. In the “Command” tab, set the “Console” option to none
  4. In the “Labels” tab, create the following 2 labels:
    io.rancher.container.agent.role=environment
    io.rancher.container.create_agent=true
  5. Copy the environment variables template below and add your domain and email values. Then click to add an Environment Variable and paste the whole block into the first “Variable” input field; all the environment variables will be added.
    API_VERSION=Production
    AURORA_ENDPOINT=
    AURORA_KEY=
    AURORA_USER_ID=
    AWS_ACCESS_KEY=
    AWS_SECRET_KEY=
    AZURE_CLIENT_ID=
    AZURE_CLIENT_SECRET=
    AZURE_RESOURCE_GROUP=
    AZURE_SUBSCRIPTION_ID=
    AZURE_TENANT_ID=
    CERT_NAME=**ENTER DOMAIN NAME**
    CLOUDFLARE_EMAIL=
    CLOUDFLARE_KEY=
    DNSIMPLE_EMAIL=
    DNSIMPLE_KEY=
    DNS_RESOLVERS=8.8.8.8:53,8.8.4.4:53
    DOMAINS=**ENTER DOMAIN NAME**
    DO_ACCESS_TOKEN=
    DYN_CUSTOMER_NAME=
    DYN_PASSWORD=
    DYN_USER_NAME=
    EMAIL=**ENTER YOUR EMAIL**
    EULA=Yes
    GANDI_API_KEY=
    NS1_API_KEY=
    OVH_APPLICATION_KEY=
    OVH_APPLICATION_SECRET=
    OVH_CONSUMER_KEY=
    PROVIDER=HTTP
    PUBLIC_KEY_TYPE=RSA-2048
    RENEWAL_PERIOD_DAYS=20
    RENEWAL_TIME=12
    RUN_ONCE=false
    VULTR_API_KEY=
  6. Create the service. Once the service is created and running correctly you will see it producing logs such as
    31/07/2021 17:31:08time="2021-07-31T16:31:08Z" level=info msg="Starting Let's Encrypt Certificate Manager v1.0.0 eb89fad"
    31/07/2021 17:31:08time="2021-07-31T16:31:08Z" level=info msg="Generating private key (2048) for myemail@mydomain.com."
    31/07/2021 17:31:09time="2021-07-31T16:31:09Z" level=info msg="Creating Let's Encrypt account for myemail@mydomain.com"
    31/07/2021 17:31:10time="2021-07-31T16:31:10Z" level=info msg="Using Let's Encrypt Production API"
  7. You can now map the load balancer to redirect calls on port 80 for yourdomain.com and path /.well-known/acme-challenge to the service you created above, so that it can handle the SSL certificate challenge.
  8. Once the certificate is generated, map it to port 443 of your domain's load balancer service for the same domain.

I hope the above is helpful but do feel free to contact us if you have any feedback or questions.

 

 

Optimise images for web publishing, improve your user experience and Google pagespeed rating.

Your website speed can be hugely affected if you are not taking care of optimising your web images. Optimisation suggestions are provided if you run a test on PageSpeed, the Google tool to assess your website speed, usability and more.

Running a PageSpeed test is as easy as visiting the page and entering your website address. Lighthouse, the tool behind PageSpeed, can also be integrated into your CI pipeline if you have the technical resources and knowledge to run these tests, but I will cover that integration in another article.

When testing your site you will get suggestions on how large the images could be if optimised. PageSpeed also provides other suggestions, such as lazy loading off-screen images, but this article will focus on the use of the convert command to generate optimised images.

As shown on https://developers.google.com/speed/docs/insights/OptimizeImages, convert can be used on your machine, for instance to optimise JPGs as such:

convert puzzle.jpg -sampling-factor 4:2:0 -strip -quality 85 -interlace JPEG -colorspace sRGB puzzle_optimised.jpg

To avoid manually running the command for each single photo, I have published a small script that uses convert to optimise the jpg images present in your current folder.

To run the optimisation you can

  1. git clone the repository `git clone git@github.com:bizmate/bash-essentials.git ~/bash-essentials`
  2. cd the folder with your last photoshoot or products folder
  3. run `~/bash-essentials/bin/web-image-optimiser.sh` <- notice you can place the bash script anywhere on your machine as long as you call it while you are inside the images folder

The result will yield new images with the _optimised suffix in their name.
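For reference, the core of such a script is essentially a loop around the convert command shown earlier; a minimal sketch (not the exact published script) could be:

# optimise every .jpg in the current folder, writing *_optimised.jpg copies
for img in *.jpg; do
  convert "$img" -sampling-factor 4:2:0 -strip -quality 85 -interlace JPEG -colorspace sRGB "${img%.jpg}_optimised.jpg"
done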

An example of an optimised vs original image

Sicilian flavour original flour image, 41 kB

Link to the original, non-optimised image, without our website optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px.jpg

Sicilian flavour optimised for web flour image, only 21 kB compared to the 41 kB of the original image

Link to the optimised image, without our website optimisation -> http://www.bizmate.biz/wp-content/uploads/2020/10/IMG_1291_800px_optimised.jpg

As you can see above, this optimisation can make a massive difference. We have seen size reductions of at least 50% in our images and, although it might be lower at times, it is still a great optimisation.

And can you see any difference between the images?

Clear up space/delete a file being held open by a process in Linux

Recently I had to troubleshoot a disk space problem, a hard disk full on our client's server, caused by a Docker process incorrectly writing files to its local file system instead of forwarding them to stdout.

# df on / shows 372G used and only 44G available
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  640M  5.7G  10% /run
/dev/md2        438G  372G   44G  90% /
tmpfs            32G  3.4M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  486M     0 100% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1001

As visible from the snippet above the /dev/md2 partition has 372G of space in use.

While investigating with du we realised that a container was taking up a lot of space by writing a massive file of about 330G under `/var/lib/docker/containers/fileID.log`.

Although we removed the offending container and its whole stack from the Rancher deployment, we realised the file was not being released and the disk space was still allocated even though the file had already been deleted. As a result we looked for deleted files that had not been released yet. To find them you can use lsof or, in our case as it was not installed, just use find/ls.
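If lsof happens to be installed, the same check is a one-liner (shown here as the alternative we did not use):

# list open files with a link count below one, i.e. deleted but still held open
sudo lsof +L1 | grep docker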

Indeed you can ls /proc/*/fd as such

sudo ls -lU /proc/*/fd | grep deleted 
lr-x------ 1 root root 64 Oct 13 2018 42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 14 2018 123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Oct 15 2018 139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)
lr-x------ 1 root root 64 Apr 18 06:00 146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log (deleted)

and see files marked as deleted but not released yet.

Then you have a few options: you can try to stop/restart the process holding the file or, as in our case where that might cause downtime for our client, simply overwrite the file with empty content. If the file were still present it could be truncated with

: > /path/to/the/file.log

but as the file had been deleted yet was still held open by a process, we can truncate it by looking up the process ID and the file descriptor and running the overwrite as shown here

: > "/proc/$pid/fd/$fd"

or

sudo sh -c ': > /proc/1233/fd/146'

if you experience permission problems in bash.

To find the process id and the file descriptor you can run

sudo find /proc/*/fd -ls | grep  deleted | grep docker
17288      0 lr-x------   1 root       root             64 Oct 13  2018 /proc/1233/fd/42 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)
1106161      0 lr-x------   1 root       root             64 Oct 14  2018 /proc/1233/fd/123 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)
4993234      0 lr-x------   1 root       root             64 Oct 15  2018 /proc/1233/fd/139 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)
1659260673      0 lr-x------   1 root       root             64 Apr 18 06:00 /proc/1233/fd/146 -> /var/lib/docker/containers/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24/15ef8edcf7dcef2ea696fdef79e8b22150789227c86ec856570a49f086300e24-json.log\ (deleted)

and as you can see in the output above, the $pid and $fd values appear in the /proc/$pid/fd/$fd paths (for example /proc/1233/fd/146).

Once you overwrite the content, your filesystem will finally regain the extra space.

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  680M  5.7G  11% /run
/dev/md2        438G   38G  377G  10% /
tmpfs            32G  3.4M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/md1        488M  486M     0 100% /boot
tmpfs           6.3G     0  6.3G   0% /run/user/1001

I hope you like this article; please feel free to share it online or contact us if you have any questions.

Gmail/GSuite and their hidden domain reputation factor, causing email connectivity damage to small businesses

A major internet service provider is causing extensive damage to our business due to their unreasonable and hidden service policies.

As a small UK business we run international online software with several users signing up to our service from Gmail or from email powered by GSuite, the Google email service provided to run email on Google's servers on behalf of your website/business.

Gmail is a great service, but as a business we have discovered that they have an internal classification for how they handle spam that goes as far as rejecting emails. An example of their rejection bounce message shows

<d**************o@gmail.com>: host gmail-smtp-in.l.google.com[74.125.142.26] said: 550-5.7.1 [xxxxxxxxxxxxx] Our system has detected that this message is 550-5.7.1 likely suspicious due to the very low reputation of the sending 550-5.7.1 domain. To best protect our users from spam, the message has been 550-5.7.1 blocked. Please visit 550 5.7.1 https://support.google.com/mail/answer/188131 for more information. m14si485512pgs.39 – gsmtp (in reply to end of DATA command)

We have now experienced this disservice since the 29th of January 2020 and all our emails are rejected. This is despite following all their guidelines and suggestions.

The situation is so bad that Google/Gmail users send emails to us and, despite their interaction, we are still unable to reply because Gmail blocks all our emails.

Below are a few facts/comments on our experience with GMail/GSuite while trying to troubleshoot the problem:

  1. Blocking emails is a really drastic measure and it also damages a business. Google's tools and suggestions do not provide any clear evidence as to why emails are being blocked.
  2. The Google tools suggested for troubleshooting the problem do not show any data on why the rejection/ban is in place.
  3. Google does not provide any direct support. Instead they use informal, indirect forums where product experts (likely to be Google employees) respond in a generic manner and do not provide any real resolution despite months of trying to troubleshoot this.
  4. Google knows their tools for monitoring traffic and helping resolve problems show no data, yet they do nothing to fix this so that the data becomes visible, and offer no alternative way to solve the problem. See one of the several articles on their forum with complaints about how “Postmaster” has no data: https://support.google.com/mail/thread/4100957?hl=en . We check on a daily basis and see no data at all, yet they still offer this as one of their tools to monitor email deliverability to their systems.
  5. Gmail/GSuite also operate as a premium provider, meaning they charge users to run email on their servers, and they are a major player in the market. Given this position, can anyone suggest whether their unreasonable blocking policy could be seen as an abuse of that position?
  6. Google product experts suggested going through “Troubleshooting for senders with email delivery issues”, and indeed we have none of the issues described there. The form leads to another form, described in the next bullet point.
  7. The Sender Contact Form is supposed to be a way to request that the ban/rejection is lifted. We have sent countless requests through this form with very detailed examples of how their service is bouncing our emails, and we still see the emails being rejected.

See our support thread on the Google forum with all the detailed exchanges with product experts and other users, confirming all the steps we have taken to troubleshoot the problem and how none of their suggestions work: https://support.google.com/mail/thread/27427166 .

If you are also experiencing this problem, have managed to solve it, or just want to share more information regarding the problem, please contact us.

GitLab Clone Private Repository with Access Token – Composer

When you host private code on GitLab, for instance for a reusable component, you will not be able to clone it unless you are authenticated with the GitLab backend and authorised on the repository.

Access tokens are a great alternative way to clone your project or add it as a dependency to a parent project. For example you can use tokens as part of your Continuous Integration pipeline to build, test and deploy your project.

If you add a project to your composer.json (Composer is the de facto package manager in PHP), such as

composer.phar require "bizmate/my-private-package:2.*"

You will see something like

Could not fetch https://gitlab.com/api/v4/projects/bizmate%2Fmy-private-package/repository/archive.zip?sha=..., enter your gitlab.com credentials to access private repos
A token will be created and stored in "/home/composer/.composer/auth.json", your password will never be stored
To revoke access to this token you can visit gitlab.com/profile/applications
Username: 
Password: 
Bad credentials.
You can also manually create a personal token at https://gitlab.com/profile/personal_access_tokens
Add it using "composer config --global --auth gitlab-token.gitlab.com <token>"

Using username and password credentials is not a great approach, as they are sensitive information and also because, if you have 2FA enabled, it might not work.

The only stable solution is to create a Personal Access Token and set it up in the Composer configuration so that composer can build the correct links to clone git repositories from GitLab by adding the right token to the URL.

Inspect your Composer configuration with the command

$ composer config --list

#or

$ composer config --global --list

to fetch the global configuration. To set up the right configuration you can run

composer config --global gitlab-token.gitlab.com a______________________a

again, the --global option is there to set it in the system-wide configuration of your PC.
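Putting it together, a typical flow for a private GitLab package might look like the following (the token placeholder, package name and repository URL are just the example values from above, not real ones):

# store the GitLab personal access token in the global Composer config
composer config --global gitlab-token.gitlab.com <token>

# register the private repository in the project as a VCS repository, then require the package
composer config repositories.my-private-package vcs https://gitlab.com/bizmate/my-private-package.git
composer require "bizmate/my-private-package:2.*"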