CI/CD Software Development Pipeline

CI/CD Pipeline & Toolchain

Continuous Integration / Continuous Delivery (CI/CD) is the agile approach to the software development lifecycle.

Users

  1. Set up a passwordless user
    $adduser --disabled-password user-name
    $passwd -d user-name
    
  2. Install dev tools

GitHub

SSH Keys

Keys must be unique: each Git account gets its own key tied to one developer. Key naming: id_<developer-name>_<git-account>, e.g. id_timyshark, id_edudater

Set up GitHub SSH

  1. On the WSL workstation, generate a local SSH key for each account:
    ssh-keygen -t ed25519 -C "[email protected]" -> will generate ~/.ssh/<id_account>, <id_account>.pub
  2. Backup id_account files on OneDrive
    C:\Users\timys\OneDrive\VMs\CloudAccess\key-pair\github
  3. (Optional) Add private keys to the ssh agent
    $eval "$(ssh-agent -s)" <- add to .bash_profile
    $ssh-add ~/.ssh/id_account <- private key
  4. On Github.com, load the public key to the GitHub account: Settings -> SSH and GPG keys -> New SSH key (copy & paste the id_account.pub content)
  5. Add entries to ~/.ssh/config
    Host <alias-name>                  <- e.g. github-<project>
      HostName github.com
      IdentityFile ~/.ssh/<id-account> <- one entry for each key file
      IdentitiesOnly yes
  6. Test: $ssh -T git@<alias-name> <- Hi <account>! You’ve successfully authenticated, but GitHub does not provide shell access.
  7. Clone Repository $git clone git@github-<account>:<organization>/repo.git
  8. Change the remote on the repo to use the new SSH alias
    $git remote set-url <remote-name> git@<alias-name>:<organization>/repo.git <- use the alias, not "github.com", e.g. github-edudate (full example config below)
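
For reference, a sketch of a multi-account ~/.ssh/config following the naming convention above (alias and key names are examples):

Host github-timyshark
  HostName github.com
  IdentityFile ~/.ssh/id_timyshark
  IdentitiesOnly yes

Host github-edudate
  HostName github.com
  IdentityFile ~/.ssh/id_edudate
  IdentitiesOnly yes

Cloning then uses the alias in place of github.com, e.g. git clone git@github-edudate:<organization>/repo.git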

GPG Gnu Privacy Guard: https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification

 

  1. List keys: gpg --list-secret-keys --keyid-format=long
  2. If a key exists, export it: gpg --armor --export 3AA5C34371567BD2 <- the key ID (sec) from step #1
  3. Add it to GitHub: Settings -> SSH and GPG keys -> New GPG key (copy and paste)
  4. Or generate a new one: gpg --full-generate-key <- min 4096 bits; then go to step #1 above
  5. Configure git: git config --global user.signingkey 3AA5C34371567BD2
  6. To set GPG_TTY at startup: [ -f ~/.bashrc ] && echo 'export GPG_TTY=$(tty)' >> ~/.bashrc

Note: the email must be verified; verify it in the Emails section under Settings.

  1. To modify the UID: gpg --edit-key 3AA5C34371567BD2; gpg> adduid; edit; gpg> save; then go to step #2

 

To sign a commit: git commit -S -m "commit msg"

To sign tags : git tag -s <tag>
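
Optionally, signing can be enabled by default so -S is not needed on every commit; a minimal sketch using the example key ID above:

$git config --global user.signingkey 3AA5C34371567BD2
$git config --global commit.gpgsign true <- sign all commits by default
$git config --global tag.gpgSign true <- sign annotated tags by default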

 

The local repository's git user.email must match the signature's user email; set it with git config user.email "…"

 

So, for each repository, make sure:

user.name = <ops name>, e.g. Edudater, instructor, etc.

user.email = "[email protected]" <- must be in the GitHub user's email list, verified, and not publicly visible. (Example commands below.)
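
For example, inside each clone (the name and key ID are the examples above; substitute a verified address for the email placeholder):

$git config user.name "Edudater"
$git config user.email "<verified-github-email>"
$git config user.signingkey 3AA5C34371567BD2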

Git on Docker Images:

cp -R ~/.ssh /var/projects/edudate-share/.ssh

Then on docker image

mv /var/share/.ssh ~/.ssh

 

https://docs.docker.com/engine/reference/commandline/cp/

docker cp <src-path> <container_id>:<dest-path>

<src-path>:= file foo.txt or directory /path/foo/.

<dst-path>:= file bar.txt or directory /path/bar/.

In this case:

docker cp ~/.ssh <container-id>:/home/edudate

 

 

Or copy the keys in the Docker build file (use this only if the docker image lives in a private repo); a sketch follows below.
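
A minimal Dockerfile sketch of that approach (user name and paths are examples; only do this for images kept in a private registry):

COPY .ssh /home/edudate/.ssh
RUN chown -R edudate:edudate /home/edudate/.ssh && chmod 700 /home/edudate/.ssh && chmod 600 /home/edudate/.ssh/id_*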

Naming conventions:

All keys are id_account ex. id_edudate id_you-me

Except for hahlabs, which has two different key naming formats:

For a2hosting : id_hahlabs[.pub]

For Github : id_hahlabs-github

 

Aliases exist for each GitHub account, e.g. Host github-<account>, such as github-edudate

SSH Configuration

Configuration file: $HOME/.ssh/config

Aliases for hosts when using the ssh command;

public keys are uploaded to the servers.

 

Host hahlabs <- host-alias
   HostName hahlabs.com <- hostname or ip
   IdentityFile ~/.ssh/id_hahlabs <- private key file
   User hahlabs <- user name
   Port 7822 <- port
   IdentitiesOnly yes

 

usage:

ssh -p 7822 -i ~/.ssh/id_hahlabs [email protected]

ssh hahlabs

 

git clone git@host-alias:<organization>/<repository>.git <folder>

git clone git@hahlabs:edudateacademy/edudate.academy.git

SSH key-based authentication

Reference: Digital Ocean SSH Key-based Authentication

  1. Generate ssh key pair using the ssh-keygen tool
  2. Append the public key to the destination SSH server's ~/.ssh/authorized_keys using ssh-copy-id
    $ssh-copy-id <user>@<host> <-- tries every available key, then installs it in the host's ~/.ssh/authorized_keys file
  3. Disable password authentication on the SSH server: edit /etc/ssh/sshd_config and set PasswordAuthentication no (see the sketch after this list)
    1. Install an SSH client and VSCode on the development workstation, e.g. Windows 11 or Linux (connecting to an EC2 Linux instance, "Turnado")
    2. Install the Remote - SSH extension in the development VSCode client
    3. Configure host connection
      1. in VSCode , F1, Ctrl+Shift+P => Remote-SSH: Open Configuration File…
      2. Select & Edit & Save C:\Users\<user>\.ssh\config
        Host turnado
        HostName 34.218.74.207 <-- Public IP of Turnado
        IdentityFile ~/.ssh/turnado_key.pem <-- in windows /%USERPROFILE%/.ssh folder
        User edudater
        Port 22
        IdentitiesOnly yes
      3. in VSCode, F1, Ctrl+Shift+P:= Remote-SSH: Connect to Host… [turnado] <- [SSH: turnado] shows at the bottom left of VSCode
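
A minimal sketch of steps 1-3 for the Turnado host above (the key name id_turnado is hypothetical; ssh-copy-id assumes you can already log in with a password or an existing key):

$ssh-keygen -t ed25519 -f ~/.ssh/id_turnado -C "edudater@turnado"
$ssh-copy-id -i ~/.ssh/id_turnado.pub edudater@34.218.74.207
then on the server, as root:
#sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
#systemctl restart ssh <- the unit may be named sshd on some distros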
https://github.com/nvm-sh/nvm
#dnf install npm
#npm install -g npm
$curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.0/install.sh | bash

# Adds the following to ~/.bashrc
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm

$source ~/.bashrc
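
After reloading the shell, install and select a Node version with nvm, for example:

$nvm install --lts <- installs the latest LTS Node release
$nvm use --lts
$node -v && npm -v <- verify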

PHP Baseline

#apt -y update
#apt install -y nginx apt-utils software-properties-common
#add-apt-repository -y ppa:ondrej/php 
#add-apt-repository -y ppa:ondrej/nginx
#apt -y update
#apt install -y zip unzip php8.2 php8.2-mysql php8.2-intl php8.2-curl php8.2-mbstring php8.2-xml php8.2-zip  php8.2-gd php8.2-bz2 php8.2-redis php8.2-memcached php8.2-fpm
#usermod -aG www-data hahlabs
#apt remove -y software-properties-common apt-utils

Composer

#cd /home/hahlabs/files
#apt install -y curl
#curl -sS https://getcomposer.org/installer -o composer-setup.php
#php composer-setup.php --install-dir=/usr/local/bin --filename=composer
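
A quick check that the install worked:

$composer --version
$composer install <- run inside a project containing composer.json to install dependencies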

Google GRPC

#apt install -y autoconf zlib1g-dev php8.2-dev php-pear 
#pecl channel-update pecl.php.net
#pecl install grpc protobuf <-- takes very long time
#cp files/lib/20-grpc.ini /etc/php/8.2/cli/conf.d
#cp files/lib/20-grpc.ini /etc/php/8.2/fpm/conf.d

Check installation

hahlabs@76a938d5b91d:~$ php -i | grep "PHP Extension"
PHP Extension => 20220829
PHP Extension Build => API20220829,NTS
hahlabs@76a938d5b91d:~$ php -i | grep extension_dir
extension_dir => /usr/lib/php/20220829 => /usr/lib/php/20220829 <- All *.so here
hahlabs@76a938d5b91d:~$ php --ini
Configuration File (php.ini) Path: /etc/php/8.2/cli
Loaded Configuration File:         /etc/php/8.2/cli/php.ini
Scan for additional .ini files in: /etc/php/8.2/cli/conf.d
Additional .ini files parsed:      /etc/php/8.2/cli/conf.d/10-mysqlnd.ini,
           :
/etc/php/8.2/cli/conf.d/20-grpc.ini,
           :

20-grpc.ini

[GRPC]
extension=grpc
extension=protobuf
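
A quick check that the extensions load after copying the ini files:

$php -m | grep -E 'grpc|protobuf' <- both should appear in the module list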

WSL 2

Install

https://github.com/microsoft/WSL/releases

Export / Import

PS>wsl -l  #list images
PS>wsl --export <image> <file.tar> <-- use Ubuntu as template image
PS>wsl --import <new-image> <target directory> <file-name.tar>

Folder Structure : <WSL-path>/<project-name>/<image-name> ex. E:/VMs/WSL/EDUDATE.ACADEMY/edudate-ops
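
For example, using that folder structure (the image and file names here just follow the example above):

PS>wsl --export Ubuntu E:\VMs\WSL\EDUDATE.ACADEMY\edudate-ops.tar
PS>wsl --import edudate-ops E:\VMs\WSL\EDUDATE.ACADEMY\edudate-ops E:\VMs\WSL\EDUDATE.ACADEMY\edudate-ops.tar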

ON LEOPARD:

Images archived on: D:\VMs\WSL\backup <- synced on GoogleDrive

WSL disks stored locally on: D:\WSLs\<project-name>

Add user

Note: on WSL (Ubuntu) use adduser, not useradd.

# adduser <new-user-name>
#addgroup <new-group-name>

Change default uid

Login as root

PS> wsl -d <image> -u root

((If taken from starfish-snapshot backups))

#./change-wsl-owner <new-user>

change-wsl-owner is a script that automates the manual steps below.

Manual

#vi /etc/wsl.conf
[user]
default=<new-user>
#groupmod -n <new-group> <old-group>
#usermod -l <new-user> -d /home/<new-user> <old-user>
ex. usermod -l edudateops -d /home/edudateops ubuntu
#mv /home/<old-user> /home/<new-user>
#exit
PS>wsl --shutdown
PS>wsl -d <new-wsl-distribution>
$echo "cd ~" >> .bashrc
$sudo apt update
$sudo apt upgrade -y
$git config --global user.name "Edudater"
$git config --global user.email "[email protected]"

Upgrade 20.04 to 22.04

#apt -y update 
# apt full-upgrade
# restart Ubuntu
# do-release-upgrade
$cat /etc/os-release

Advanced Settings

wsl.conf & .wslconfig
https://docs.microsoft.com/en-us/windows/wsl/wsl-config

IP Address

#ip addr  | grep 'global eth0'
# chmod +x /usr/bin/get-ip-addr
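
The get-ip-addr helper referenced above (and used by the port-forwarding script later) is not shown in these notes; a minimal sketch of what such a script might contain (an assumption, not the actual script):

#!/bin/bash
# /usr/bin/get-ip-addr (hypothetical content) - print the WSL eth0 IPv4 address
ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1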

Automatic WSL services startup

https://www.how2shout.com/linux/how-to-start-wsl-services-automatically-on-ubuntu-with-windows-10-startup/

Packages

#dpkg -l | grep php | tee packages.txt
#dpkg --remove --force-remove-reinstreq awsvpnclient

Enable/Disable service

# systemctl disable nginx

WSL2 Problems

  1. Download latest WSL from Install section above
  2. Networking through NAT is unreliable; issues are still reported to Microsoft.
  3. Can't install php8.2 from binaries; it needs to be compiled.
  4. Using Docker works fine.

Available Disk space

https://learn.microsoft.com/en-us/windows/wsl/disk-space

PS>wsl --system -d <distribution-name> 
#df -h /mnt/wslg/distro

Compact vdisk:

PS>diskpart
DISKPART>select vdisk file="path\to\ext4.vhdx"
DISKPART>compact vdisk
DISKPART>detail vdisk
DISKPART>expand vdisk maximum=<sizeInMegaBytes>
DISKPART>exit

Expanding

# mount -t devtmpfs none /dev
# mount | grep ext4
# apt install e2fsprogs <- provides resize2fs
# resize2fs /dev/sdX <sizeInMegabytes>M
resize2fs 1.44.1 (24-Mar-2021)

Filesystem at /dev/sdb is mounted on /; on-line resizing required
old_desc_blocks = 32, new_desc_blocks = 38

The filesystem on /dev/sdb is now 78643200 (4k) blocks long.

Networking:

    • From your WSL distribution (ie Ubuntu), run the command:
      ip addr
    • Find and copy the address under the inet value of the eth0 interface.
    • If you have the grep tool installed, find this more easily by filtering the output with the command:
      ip addr | grep eth0
    • Connect to your Linux server using this IP address.

Execution Policy

Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope CurrentUser

Fix Signing

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

Network port forwarding:

PS>VMs\WSL\scripts\network.ps1
If (-NOT ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")) {
    # Relaunch the script elevated if it is not already running as Administrator
    $arguments = "& '" + $myinvocation.mycommand.definition + "'"
    Start-Process powershell -Verb runAs -ArgumentList $arguments
    Break
}

# Get the WSL IP address via the get-ip-addr helper and validate it
$remoteport = wsl get-ip-addr
$found = $remoteport -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';

if ($found) {
    $remoteport = $matches[0];
}
else {
    Write-Output "IP address could not be found";
    exit;
}

$ports = @(5001, 19000, 19001);

for ($i = 0; $i -lt $ports.length; $i++) {
    $port = $ports[$i];
    # Remove any existing proxy/firewall rule for this port, then re-create it pointing at the WSL address
    Invoke-Expression "netsh interface portproxy delete v4tov4 listenport=$port";
    Invoke-Expression "netsh advfirewall firewall delete rule name=$port";

    Invoke-Expression "netsh interface portproxy add v4tov4 listenport=$port connectport=$port connectaddress=$remoteport";
    Invoke-Expression "netsh advfirewall firewall add rule name=$port dir=in action=allow protocol=TCP localport=$port";
}

PS>Invoke-Expression "netsh interface portproxy show v4tov4";

Docker

Docker installation

# dnf install docker
# chkconfig docker on  <- auto start
# usermod -aG docker $USER <- adds $USER to the docker group (root-equivalent privileges)
# systemctl start docker
# curl -L https://github.com/docker/compose/releases/download/v2.29.1/docker-compose-$(uname -s)-$(uname -m) -o /usr/libexec/docker/cli-plugins/docker-compose
# chmod +x /usr/libexec/docker/cli-plugins/docker-compose
# chown root:root /usr/libexec/docker/cli-plugins/docker-compose
# gpasswd -a $USER docker && sudo reboot <- adds user to docker group then reboot

Docker Hub

Docker Repository relocation

  1. Create PAT (Personal Access Token) from Account settings
$docker login -u <docker-id> -p <password | PAT> 
$echo "MySecretPa$$w0rd" | docker login -u <docker-id> --password-stdin

Move Docker Registry

# service docker stop
# mv /var/lib/docker /folder/to/new-docker-repo
# nano /etc/docker/daemon.json
{
"data-root": "/srv/new-drive/new-docker-root"
}
# rm -rf /var/lib/docker
# service docker start
$docker info -f '{{ .DockerRootDir}}'  <- Verify registry location

Clear Docker Local Repository

$ docker container prune -f <- removes all stopped containers
$ docker image prune -a -f
$ docker volume prune -a -f
$ docker buildx prune -f
$ docker system prune -a
$ docker image rm -f $(docker images -q) <- clear all cached images
$ docker rm -f $(docker ps -a -q) <- Removes all containers
$ docker rm $(docker ps --filter status=exited -q)
$ docker image rm -f <image-id>
$ docker image prune <- clears dangling (untagged) images
$ alias docker_ci='docker rmi $(docker images -a --filter=dangling=true -q)'
$ alias docker_cc='docker rm $(docker ps --filter=status=exited --filter=status=created -q)'
$ docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

Operation Commands

$ docker ps -a
$ docker rm <container-id>
$ docker login -u <user-id>  <-- Access Token
$ docker tag <old-image> <new-image>
$ docker push <user-name>/<image-name> <= Image name must be tagged as <user-name>/<image-name>
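
For example, pushing a hypothetical local image to the timyshark Docker Hub account:

$ docker tag my-image:latest timyshark/my-image:latest
$ docker push timyshark/my-image:latest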

Mount Volumes

$ docker volume create <volume-name>  ex. todo-db
$ docker run -v <volume-name>:<mount-point> <image-tag> <command>
ex.
$ docker run -dit \
-v /path/to/volume/edudate-view:/app/edudate-view \
-w /app/edudate-view \
timyshark/angular-14  "/bin/bash"
$ docker run -v <host-path>:<container-mount-point> <image> <command>

ex.

$ docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "yarn install && yarn run dev"

$ docker exec -it <container-id> "<command>"
ex.

$ docker exec -it 299485f3 "/bin/bash"

$ docker volume inspect <volume-name>
/lib/systemd/system/containerd.service

Detach from a session

$ docker run --name test_redis --sig-proxy=false -p 6379:6379 redis
$ docker tag local-image:tagname new-repo:tagname
$ docker push new-repo:tagname
$ docker start  `docker ps -q -l` # restart it in the background
$ docker attach `docker ps -q -l` # reattach the terminal & stdin
$ docker exec -it `docker ps -q -l` "/bin/bash"  # execute on the latest docker
$ docker exec -it "`docker ps | sed -n /$CONTAINER_NAME/p  | sed -e 's/\(^[[:xdigit:]]\{12\}\).*$/\1/g'`" /bin/bash #execute based on container name

Docker ENTRYPOINT

CMD: the CMD instruction has three forms (use double quotes " not single '):
  • It creates a layer in the image for the command.
  • CMD ["executable","param1","param2"] (exec form, the preferred form): no shell is used; give the full path to the executable if ENTRYPOINT is not specified.
  • CMD ["param1","param2"] (default parameters to ENTRYPOINT["..."] if present): Docker combines the ENTRYPOINT + CMD arrays into one command.
  • CMD command param1 param2 (shell form): uses "sh -c", i.e. sh -c 'command params...' <==> CMD ["/bin/sh","-c","command param1 param2"]
Use the form CMD [ "sh", "-c", "echo $HOME" ] to get shell substitutions. Arguments passed to docker run override CMD.
  • ENTRYPOINT ["executable", "param1", "param2"] (exec form): use double quotes " not single ', and the full path to the executable.
  • ENTRYPOINT command param1 param2 (shell form) <- combines run or CMD commands as arguments, uses /bin/sh -c.
    ENTRYPOINT should be the last entry in the Dockerfile; it does not receive SIGTERM unless "exec" is used before the command, e.g. ENTRYPOINT exec top -b
  • The docker run option --entrypoint does not use "sh -c"; it is treated as ["command"...].
Use ENTRYPOINT to run a docker container as a service, by running a daemon in the foreground or simply a tail -f /dev/null command. A short sketch follows below.
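
For illustration, a minimal Dockerfile sketch combining exec-form ENTRYPOINT and CMD (the image and commands are examples, not from these notes):

FROM ubuntu:22.04
ENTRYPOINT ["/usr/bin/tail"]
CMD ["-f", "/dev/null"]
# docker run <image>                       -> tail -f /dev/null (keeps the container alive as a "service")
# docker run <image> -n 5 /var/log/syslog  -> overrides CMD: runs tail -n 5 /var/log/syslog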

View logs:

$docker logs <container-id>

Ubuntu set timezone in Dockerfile

Installing packages that require input from the user, such as tzdata or mysql, requires setting the environment variable DEBIAN_FRONTEND=noninteractive.

One way to do this is to set the variable in the same session, since each RUN creates a new session:

ENV TZ="America/Vancouver"

RUN DEBIAN_FRONTEND=noninteractive TZ=America/Vancouver apt-get -y install tzdata && ln -fs /usr/share/zoneinfo/America/Vancouver /etc/localtime

 

If sudo is needed, use the -E (--preserve-env) option:

RUN sudo -E apt install -y tzdata

Windows: Move Docker repo to a different drive: https://docs.docker.com/engine/reference/commandline/dockerd/
PS>cd E:\Programs\Docker\resources
PS> sc stop com.docker.service
PS> dockerd --unregister-service
PS> dockerd --register-service --data-root E:\ProgramData\docker
PS>sc start com.docker.service
Option 2:
$>cmd
$>start /w "" "Docker Desktop Installer.exe" install --installation-dir=E:\Programs\Docker
Change repository location (stored in wsl distro docker-desktop-data):
PS>wsl --shutdown
PS>wsl --export docker-desktop D:\VMs\DockerDesktopWSL\archive\docker-desktop.tar
PS>wsl --export docker-desktop-data D:\VMs\DockerDesktopWSL\archive\docker-desktop-data.tar
PS>wsl --unregister docker-desktop
PS>wsl --unregister docker-desktop-data
PS>wsl --import docker-desktop E:\VMs\DockerDesktopWSL\distro D:\VMs\DockerDesktopWSL\archive\docker-desktop.tar
PS>wsl --import docker-desktop-data E:\VMs\DockerDesktopWSL\data D:\VMs\DockerDesktopWSL\archive\docker-desktop-data.tar

IP Address of a container:

docker inspect -f "{{ .NetworkSettings.IPAddress }}" [container-name-or-id]

Login :

cat ~/my_password.txt | docker login --username foo --password-stdin

Linux Ubuntu Root Login

docker exec -u 0 -it mycontainer bash

 

--data-root=/var/snap/docker/common/var-lib-docker

Default conf: /var/snap/docker/current/config/daemon.json

 

 

Python

Installation

$sudo apt -y update && sudo apt -y upgrade
$sudo apt install python3
$sudo apt install python3-pip
$sudo apt install python3-venv

Environment

$python3 -m venv whatever

Google SDK for Python

Installation

$cd your-project
$python3 -m venv google
$source google/bin/activate <-- Linux
PS> venv\Scripts\activate <-- Windows
(google)[your-project]$pip install google-auth
(google)[your-project]$pip install google-api-python-client
(google)[your-project]$pip install --upgrade oauth2client
(google)[your-project]$deactivate
Or use a requirements.txt file:
requirements.txt
google-auth
google-api-python-client
oauth2client
$pip install -r ./requirements.txt

Authenticate for Google Drive

Python App Installer

$pip install -U pyinstaller
$pyinstaller your_program.py

Ensure the .secrets folder is copied into the _internal folder; for deployment, move the entire dist/your_program folder to the destination, e.g. /usr/local/bin.
To call the application: $your-program (a sketch follows below)
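
A minimal deployment sketch under those conventions (folder names follow PyInstaller's one-dir layout; the program name and install path are examples):

$pyinstaller your_program.py
$cp -R .secrets dist/your_program/_internal/
$sudo mv dist/your_program /usr/local/bin/
$/usr/local/bin/your_program/your_program <- or symlink the binary onto the PATH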

Google Service Account

  • Service accounts are created on a per-project basis.
  • Service accounts are granted permissions to enable activities.
  • A project needs to enable the specific API to give the member service account clearance to operate; for example, the Google Drive API must be enabled to allow the service account to make Google Drive API calls.
  • Service accounts generate keys, which allow gcloud and the gcloud SDK to authenticate and be granted permission; the service account key is a JSON file containing a private key for the project.
  • A service account is an email address, my-service-account@your-project-id.iam.gserviceaccount.com, and is the representative identity that can be granted permission to other services (such as sharing a Google Drive).
  • The Google account that creates the service account is the owner by default.
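
Once a key file has been downloaded, the gcloud CLI can authenticate as the service account; a minimal sketch (the key file name is an example):

$gcloud auth activate-service-account --key-file=./key.json
$gcloud auth list <- the service account should show as the active account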

Steps to create GCP Service Account SDK Archive

  1. Create a service account in a project (CLI sketch below).
  2. Enable the Google Drive API.
  3. Share the target drives with the service account.
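
Steps 1 and 2 can also be done from the gcloud CLI; a minimal sketch, assuming the example project ID and account name used above:

$gcloud iam service-accounts create my-service-account --project=your-project-id
$gcloud services enable drive.googleapis.com --project=your-project-id
$gcloud iam service-accounts keys create ./key.json --iam-account=my-service-account@your-project-id.iam.gserviceaccount.com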
