96-fromsofia.net


Git server with gitweb interface running on ECS

Overview

In this article, we will be setting up our first ECS cluster and creating the required infrastructure to run our application. The application in this example will be a git server which will support:

  • cloning and pushing to existing repositories with ssh-key authentication
  • creating new repositories on the git server with s3 and datasync
  • visualizing our git repository with the web UI gitweb

There are quite a few resources we will need to set up, and we'll be doing this manually. I am planning to release CloudFormation templates for builds like this in the foreseeable future, but I also see a benefit in knowing how to perform the process by hand.

The infrastructure will consist of the following:

  • ECS cluster to run our task.
  • Task definition to run our containers within the cluster.
  • Task service to ensure our task is running.
  • ECR repository for our docker images.
  • Elastic IP for our EC2 cluster instance.
  • EC2 launch template for our ASG.
  • An ASG with a fixed capacity working as the compute resource behind the ECS cluster.
  • An EFS filesystem for highly available and resilient persistent storage in our containers.
  • Security groups for the EC2 instance and EFS file system.
  • IAM role for the EC2 cluster instance.
  • IAM user for the aws-cli.
  • S3 bucket for creating new git repositories and uploading new letsencrypt SSL certificates.
  • DataSync tasks for replication between S3 and EFS.

That being said, this article assumes the following components are already present and won’t be covered here.

  • AWS account with an IAM user having the necessary permissions for creating the above infrastructure.
  • Either a domain or subdomain for your git server. Replace 96-fromsofia.net in the instructions with your own domain.
  • An AWS VPC with a public subnet accessible from the internet.

With all of the above covered, it’s now time to dive in and get our hands dirty.

Step 1 - Setting up the storage services and replication

As explained, we will be using S3 and EFS from AWS’ storage portfolio.

EFS will be used to mount some directories into our containers for persistent storage. My reasoning for choosing EFS over standard EBS volumes is the ability to simultaneously mount the same directory across different instances, plus the high availability and automatic backups that EFS comes with. This means I don't have to worry about backing up the EC2 instances' EBS volumes, which turns the EC2 instances into easily replaceable compute resources; in my opinion, a perfect fit for container workloads.

Type EFS in the search box from your AWS console and select the service.

Click on Create file system, add a name for the filesystem and select your VPC.

Image

Click on create. If you go over to the Network tab from the EFS view, you will see your file system's mount targets being created.

Image

This will take some time but you can proceed with creating the other resources and we will come back to EFS a bit later.

The S3 bucket will be used for 2 things: uploading new repositories (not cloning/pushing) to the git container and uploading new SSL certificates to the gitweb container.

This way, I can increase my containers' security by disabling the tty on the git container and allowing SSH connections to use only the git-shell, meaning SSH connections will refuse to provide a shell and scp won't work. As for the gitweb container, that means I don't have to run an SSH server on it at all; the only thing it has to expose is the website via ports 80 and 443.

Head over to the S3 console and create a new bucket with all the default settings, e.g. public access blocked, ACLs disabled, etc.
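If you prefer the command line, a bucket with the same defaults can also be created from the AWS CLI. This is just a sketch; the bucket name gitupload.96-fromsofia.net and the eu-west-1 region are examples, so substitute your own values.

# Create the bucket (outside us-east-1 a LocationConstraint is required)
aws s3api create-bucket --bucket gitupload.96-fromsofia.net --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
# Block all public access, matching the console defaults
aws s3api put-public-access-block --bucket gitupload.96-fromsofia.net --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true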

Step 2 - Creating our IAM resources

We will need to manually create 2 resources in the IAM console.

The IAM role will be used by the EC2 cluster instance and will have the standard managed policy for ECS container instances as well as a customer managed policy we will create ourselves. The customer managed policy is required so that our instance has permission to assign an Elastic IP to itself. Later, when we create our launch template and Auto Scaling group, you will see why this is important.

First, however, we need to make sure we have an Elastic IP allocated in the region where our EC2 cluster instance will run.

In the EC2 console on the left side panel under Network and Security go to the Elastic IP dashboard.

Image

Once you have the Elastic IP allocated, you will be able to see its allocation ID. Note this down as it will be required in the next step.

Image
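If you prefer the CLI, the allocation is a single command; a small sketch, with the region as an example:

# Allocate a new Elastic IP in the VPC scope and print its allocation ID
aws ec2 allocate-address --domain vpc --region eu-west-1 --query AllocationId --output text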

For the policy go to the IAM console and under Access Management select Policies.

Image

Click on Create a policy and choose the JSON option.

Image

Paste the following IAM policy into the editor, making sure you replace the <ACCOUNT_ID> and <ALLOCATION_ID> appropriately.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EC2AssociateEIP",
            "Effect": "Allow",
            "Action": "ec2:AssociateAddress",
            "Resource": [
                "arn:aws:ec2:*:<ACCOUNT_ID>:elastic-ip/<ALLOCATION_ID>",
                "arn:aws:ec2:*:<ACCOUNT_ID>:instance/*"
            ]
        }
    ]
}

Review your changes and save the policy.

Image

Now for the role, still from the IAM console select Roles from the side menu and choose Create a role.

For the Trusted entity type choose AWS Service and under Common use cases select EC2 and proceed.

Image

From Add permissions, look for the policy you just created and select it.

Image

Before proceeding, look for the AmazonEC2ContainerServiceforEC2Role managed policy and make sure it is selected as well.

Image

Continue and on the following screen enter a name for your role and a brief description of its intended use. Click on Create role.

Image
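For reference, the same role can be put together from the CLI. The sketch below makes a few assumptions: the role name ecs-cluster-instance-role is arbitrary, and <EIP_POLICY_ARN> stands for the ARN of the customer managed policy created above.

# Trust policy allowing EC2 instances to assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name ecs-cluster-instance-role --assume-role-policy-document file://trust.json
# Attach the custom Elastic IP policy and the ECS container instance managed policy
aws iam attach-role-policy --role-name ecs-cluster-instance-role --policy-arn <EIP_POLICY_ARN>
aws iam attach-role-policy --role-name ecs-cluster-instance-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
# EC2 attaches roles through an instance profile
aws iam create-instance-profile --instance-profile-name ecs-cluster-instance-role
aws iam add-role-to-instance-profile --instance-profile-name ecs-cluster-instance-role --role-name ecs-cluster-instance-role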

An IAM user with access to DataSync and S3 will be required too, so we can upload data from our local machine over to the bucket and execute the DataSync replication task remotely from our workstation. Additionally, for convenience we will want to allow this user access to the ECR repositories so we can use it to push our Docker images.

Again from the IAM console’s side panel select the Users option. Access to the management console will not be required.

Enter a username and create your user.

Image

Click on your newly created user and select the Security Credentials tab.

Image

Under Access Keys click on Create an Access Key. For our setup the top option would suffice.

Image

When you click on Next your Access key will be displayed alongside your Secret Access Key. The Secret Access key is shown only once at this screen and can never be retrieved again. If you don’t write it down now, you will need to create a new Access and Secret key combination and note down their values.

Finally, go to the Permissions tab and ensure you have added the following 3 managed policies.

  • AmazonEC2ContainerRegistryFullAccess
  • AmazonS3FullAccess
  • AWSDataSyncFullAccess

Image
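If you'd rather do this from a CLI session that already has admin credentials, the attachments look roughly like this; a sketch assuming the user is called git-admin:

for policy in AmazonEC2ContainerRegistryFullAccess AmazonS3FullAccess AWSDataSyncFullAccess; do
    aws iam attach-user-policy --user-name git-admin --policy-arn "arn:aws:iam::aws:policy/${policy}"
done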

Step 3 - Creating Security Groups

We will need 2 security groups for this setup. One will be applied to our EC2 cluster instance and will allow inbound traffic for SSH, HTTP/S and a custom TCP port.

The other group will be applied to the EFS file system and will naturally allow NFS traffic, but from the EC2 cluster instance’s security group only.

Go over to the EC2 console and under Network and Security click on Security Groups.

Image

Enter a name and description for the security group and make sure to select the VPC you are planning to use with the EC2 cluster instance.

Image

The inbound Rules for the EC2 cluster instance are as follows.

Image

For SSH access I have chosen to allow connections originating only from the VPC’s private CIDR range. You can choose Anywhere and expose SSH to the world but I prefer to use a bastion in my VPC to connect to production instances.

The custom TCP port will be mapped to port 22 on the git container; the port number itself can be any arbitrary port you pick. This will allow us to use SSH for git clone and push commands.

The HTTP and HTTPS rules will be used by the gitweb container.

Follow the same process for the second security group and add only one inbound rule.

Image

The Type should be set to NFS. Set the source to Custom and enter the first security group's ID. This ensures NFS connections are allowed from whatever resources are part of the EC2 cluster instance's security group.
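The CLI equivalent looks roughly like this; a sketch with hypothetical group names, and <VPC_ID>, <VPC_CIDR> and <CUSTOM_TCP_PORT> standing in for your own values:

# Security group for the EC2 cluster instance
EC2_SG=$(aws ec2 create-security-group --group-name git-ec2-sg --description "git ECS instance" --vpc-id <VPC_ID> --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id $EC2_SG --protocol tcp --port 22 --cidr <VPC_CIDR>
aws ec2 authorize-security-group-ingress --group-id $EC2_SG --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $EC2_SG --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $EC2_SG --protocol tcp --port <CUSTOM_TCP_PORT> --cidr 0.0.0.0/0
# Security group for EFS: NFS (2049) allowed only from the instance security group
EFS_SG=$(aws ec2 create-security-group --group-name git-efs-sg --description "git EFS" --vpc-id <VPC_ID> --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id $EFS_SG --protocol tcp --port 2049 --source-group $EC2_SG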

Now that the EFS security group has been created, head over to the EFS console and select your file system.

Under the Network tab, click the button to manage the network settings. From the security group dropdown, select the EFS security group we just created.

Image

Make sure to repeat the process for each of the mount targets listed here.

Image

Step 4 - Create the DataSync replication tasks

DataSync will play an important role because it provides a way for us to replicate the S3 data on demand over to the EFS file system.

I was planning to fully automate the process so the S3 objects have a relatively short TTL and an EventBridge rule kicks off the DataSync tasks upon object creation. Unfortunately, DataSync is not a valid target for EventBridge, so I couldn't implement this. I intend to automate it with Lambda in the near future; for the time being the upload process will be somewhat manual.

To create the DataSync tasks in your AWS account, head over to the DataSync console. In the Getting started menu on the right side, choose the Between AWS services dropdown option.

Image

On the next screen choose new location and specify the S3 bucket created earlier.

I will be creating 2 DataSync tasks: one that pushes everything uploaded to the s3://bucket/git/ folder to a specific path in EFS, and another one for s3://bucket/letsencrypt/.

The IAM role should be auto-generated at this same screen if it doesn’t already exist.

Image

For the destination target, again choose a new location and select the EFS service. Then select your region and file system ID.

For the mount path you can technically select whatever path you want, as long as you also create it in the EFS file system. The path doesn't need to exist yet in order to create the replication task, but it must be there before you launch it.

Finally, select the security group. Here you should select the EC2 cluster instance security group and NOT the EFS one. Remember, DataSync needs to access and write to EFS, and we granted NFS access to EFS only from the EC2 cluster instance's security group.

Image

On the next screen you can modify any of the replication settings. I personally stuck with the defaults: verify the transferred data's integrity and transfer only changed data from the source, to reduce costs from redundant network traffic.

Image

A CloudWatch log group will need to be selected. If you don't have an existing one, you can choose to auto-generate it here.

Image

Once your first replication task has been created, follow the same process to create another one for the letsencrypt SSL certificates. The only difference should be the bucket: s3://bucket/letsencrypt/ and EFS: /git.96-fromsofia.net/gitweb/letsencrypt/git.96-fromsofia.net/ paths.

Again the EFS path can be set to whatever you like as long as you make sure to create it.

Image

Step 5 - Creating our EC2 infrastructure

In this step, we create the EC2 launch template that our ASG will use to spawn instances. This ASG in turn will serve as the ECS cluster's capacity provider.

A launch template is a complete and fairly extensive set of configuration options you can define once and use to launch EC2 instances with all of those options pre-applied.

To create a launch template go to the EC2 console and under Instances select Launch Templates.

Image

Click to create a launch template and you will be taken to the template configuration section. The first thing we will need is to select an AMI.

You can choose any AMI and install the ECS agent on your instance manually or via a user-data script; however, I find using the Amazon ECS-optimized AMI a much more convenient option.

To choose the ECS-optimized AMI, type ecs in the AMI search box and hit Enter. You will be taken to the Marketplace, where you can select the latest Amazon Linux ECS-optimized AMI; once selected, you will be taken back to the launch template screen.

Image

Next, if you are planning to have SSH access to your Docker host, or in this case the ECS container instance, you will probably also want to specify a key pair for the instance. Make sure you have one generated and select it.

Next comes selecting your subnet and security group. The security group should be the one you created for the EC2 cluster instance.

Expand the Advanced network configuration section and make sure that Auto-assign public IPv4 is Enabled! This is required so we can get a public IPv4 DNS which ECS needs to register our instance.

Image

Scroll all the way to the bottom of the page and expand the Advanced details section.

Under IAM instance profile, make sure you have selected the IAM role we created in Step 2.

Under DNS Hostname, ensure 'Enable resource-based IPv4 (A record) DNS requests' is ticked.

Image

Now, at the very bottom of the page under User data, paste the script below and amend the following values.

<CLUSTER_NAME> Set this to the name of your ECS cluster. We haven't created the cluster yet, so just make sure whatever value you choose here is the same name you give the cluster later.

<ALLOCATION_ID> This is the same Elastic IP allocation ID we used earlier when creating our custom IAM policy.

Finally, make sure to add one empty line at the end of the file as your last command may otherwise not be executed.

#!/bin/bash
# Update the OS and install the unzip package
yum -y update
yum install -y unzip
# Add your cluster name to the ECS_CLUSTER variable so the ecs daemon can register your instance
echo ECS_CLUSTER="<CLUSTER_NAME>" >> /etc/ecs/ecs.config
# Download and install the AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
# Associate your Elastic IP with the EC2 instance
/usr/local/bin/aws ec2 associate-address --instance-id `curl http://169.254.169.254/latest/meta-data/instance-id` --allocation-id <ALLOCATION_ID>
# Restart the ecs daemon so your instance registers with the cluster
systemctl restart ecs

With all of our settings applied, we can now save the launch template and head over to the ASG dashboard.

Image

Click on Create an auto scaling group and enter the name of the ASG in the pop-up screen. Make sure to also select the launch template you’ve just created.

Image

In the next step, make sure to select your VPC as well as at least 2 AZs. The reason behind this is that in case of AZ maintenance or the unlikely event of a disaster, your EC2 instance can be spun up in another AZ, ensuring availability of your containers.

Image

The next step heavily depends on what you are willing to spend.

I am trying to keep costs minimized with this project and I am the only person working with this git server. The gitweb interface is mostly up there as part of my visual portfolio and sees very little traffic. Although I do treat this as a production system, it is essentially a personal website. That being said, I am not creating a load balancer and in this setup I will leave the defaults as they are.

Image

The next screen is where you set the desired capacity.

If you added a load balancer in the previous step, you can remove the user data script from your launch template altogether, as the public IP will live on the load balancer. In my case, however, I am setting the minimum, maximum and desired instance counts all to 1, to ensure exactly one instance is always running.

Image
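If you ever need to pin the group's size from the CLI instead of the console, something like this works; a sketch assuming the ASG is named git-asg:

# Keep exactly one cluster instance running at all times
aws autoscaling update-auto-scaling-group --auto-scaling-group-name git-asg --min-size 1 --max-size 1 --desired-capacity 1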

Save your settings and create the ASG. In a few minutes, you will see an instance being started in the EC2 console. This is your EC2 cluster instance.

Step 6 - Create the EFS file system structure

There are different approaches you can take for this step; depending on what access you have to your EC2 cluster instance, you may be able to get away with using it alone. I personally allow SSH access to my EC2 instance only from the internal network in the VPC and have other public instances serving as bastions.

You will need to create some directories within the EFS filesystem, which means you will need access to an EC2 instance which is associated with the EC2 cluster instance security group and subnet.

When you click on the EFS file system you created earlier in the AWS console, you will see an Attach button. Click on it and you will be provided with commands you can use from within your VPC to mount the EFS file system.

Copy the command given to you and mount the EFS file system on your EC2 instance. To follow this guide you will need at least 4 different directories; you don't have to use the same directory structure and names as I have, but you will need 4 separate directories that EFS can export.

# mkdir /mnt/efs ; sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 10.55.10.125:/ /mnt/efs
# df -hTP /mnt/efs
Filesystem     Type  Size  Used Avail Use% Mounted on
10.55.10.125:/ nfs4  8.0E     0  8.0E   0% /mnt/efs
# mkdir -p /mnt/efs/git.96-fromsofia.net/{git,gitweb/{custom,letsencrypt/git.96-fromsofia.net,log}}/
# ls -ld  /mnt/efs/git.96-fromsofia.net/{git,gitweb/{custom,letsencrypt/git.96-fromsofia.net,log}}/
drwxr-xr-x 2 root root 6144 Jan 27 00:46 /mnt/efs/git.96-fromsofia.net/git/
drwxr-xr-x 2 root root 6144 Jan 27 00:46 /mnt/efs/git.96-fromsofia.net/gitweb/custom/
drwxr-xr-x 2 root root 6144 Jan 27 00:46 /mnt/efs/git.96-fromsofia.net/gitweb/letsencrypt/git.96-fromsofia.net/
drwxr-xr-x 2 root root 6144 Jan 27 00:46 /mnt/efs/git.96-fromsofia.net/gitweb/log/
# df -hTP  /mnt/efs/git.96-fromsofia.net/{git,gitweb/{custom,letsencrypt/git.96-fromsofia.net,log}}/
Filesystem     Type  Size  Used Avail Use% Mounted on
10.55.10.125:/ nfs4  8.0E     0  8.0E   0% /mnt/efs
10.55.10.125:/ nfs4  8.0E     0  8.0E   0% /mnt/efs
10.55.10.125:/ nfs4  8.0E     0  8.0E   0% /mnt/efs
10.55.10.125:/ nfs4  8.0E     0  8.0E   0% /mnt/efs
# chown git:git -R /mnt/efs/git.96-fromsofia.net/git/

Now that the EFS file system has been mounted, you can copy your static data into the ../gitweb/custom folder. The static data in my case consists of a favicon.ico, a git-logo.png and a home_text.html.

These are optional gitweb assets that override the favicon, the logo displayed in the top right corner and the greeting message on the default projects view.

$ ls -l 
total 16
-rw-rw-r-- 1 kare5434 kare5434  318 Jan 28 04:11 favicon.ico
-rw-rw-r-- 1 kare5434 kare5434 4112 Jan 28 04:12 git-logo.png
-rw-r--r-- 1 kare5434 kare5434  250 Jan 28 04:11 home_text.html
# Replace the below with the credentials for the EC2 instance where the EFS is mounted
$ scp -i <SSH_KEY> * ec2-user@54.194.64.190:~/
favicon.ico                                                                                                                                                                                                100%  318    13.2KB/s   00:00    
git-logo.png                                                                                                                                                                                               100% 4112   144.5KB/s   00:00    
home_text.html                                                                                                                                                                                             100%  250    13.8KB/s   00:00    
$ ssh -i <SSH_KEY> ec2-user@54.194.64.190
$ sudo cp favicon.ico git-logo.png home_text.html /mnt/efs/git.96-fromsofia.net/gitweb/custom/
$ ls /mnt/efs/git.96-fromsofia.net/gitweb/custom/
favicon.ico  git-logo.png  home_text.html
$ 

Step 7 - Registry repository and containers

In this step, we will build our Docker images and create the ECR repositories. A repository hosts our Docker images and is where the ECS cluster pulls them from to run our tasks.

From the search box in the AWS console, type ECR and you should see the container registry service appear. Click on it. Select Create repository.

One thing that is quite important here is to make sure your Repository name matches the exact name of your image. For example, if your image is called my-image make sure to enter my-image in the Repository name box here.

Ensure Private is selected under Visibility settings and create the repository.

Image

Create a repository for each image. At the end, they will both appear in the console as such.

Image
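If you prefer the CLI, each repository is a one-liner. A quick sketch; the region is an example and gitweb is an assumed name for the second image, since the repository name only has to match whatever you call your image:

$ aws ecr create-repository --repository-name git-server --region eu-west-1
$ aws ecr create-repository --repository-name gitweb --region eu-west-1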

Remember the IAM user we created in Step 2, the one whose secret access key you had to note down? You will need it now.

The next step is quickly setting up the AWS CLI and our development environment. This can be an on-prem VM, your Linux laptop or another EC2 instance as long as the machine can connect to the internet.

Install the AWS CLI on your development machine if it isn't there already, then configure it with the following command. Make sure to amend the <ACCESS_KEY>, <SECRET_KEY> and <REGION> values accordingly.

$ aws configure
AWS Access Key ID [None]: <ACCESS_KEY> 
AWS Secret Access Key [None]: <SECRET_KEY>
Default region name [None]: <REGION>
Default output format [None]: text
$ 

On my development machine I use RHEL, which ships with Podman, an OCI-compatible alternative to Docker (kind of). You are absolutely free to use Docker if you prefer it; just replace every 'podman' in my commands with 'docker'.

The git-server image build consists of 2 files: a Dockerfile and a public SSH key. The image installs the git and openssh-server packages, creates a git user restricted to the git-shell, creates the mount target for the git repositories and runs the SSH server.

ssh-key.pub (example):

ssh-rsa <SSH-KEY> <USERNAME>
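If you don't have a key pair for this yet, you can generate one locally; a minimal sketch, where the ssh-key file name is only an assumption that has to match the ADD line in the Dockerfile below:

# Generate a 4096-bit RSA key pair; ssh-key.pub goes next to the Dockerfile,
# ssh-key stays on your workstation and is used later for git clone/push over SSH
$ ssh-keygen -t rsa -b 4096 -C "git-server access" -f ./ssh-key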

Dockerfile:

# Select base image and install your packages
FROM ubi9:latest
RUN yum install -y git openssh-server
RUN ssh-keygen -A

# Setup the git user and create the git repo mount target
RUN useradd -s `which git-shell` -u 3091 git
RUN mkdir -p /srv/git

# Add your public key for passwordless ssh authentication with git interactions
RUN mkdir /home/git/.ssh && chmod 700 /home/git/.ssh
ADD ./ssh-key.pub /home/git/.ssh/authorized_keys
RUN chmod 600 /home/git/.ssh/authorized_keys && chown git:git -R /home/git/.ssh

# Secure git-shell against: CVE-2017-8386 
RUN sed -i '1s/^/no-pty /g' /home/git/.ssh/authorized_keys

# Expose port 22 and run the ssh daemon
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]

To build your container run:

$ podman build -t git-server:01 .
STEP 1/11: FROM ubi9:latest
...
STEP 11/11: CMD ["/usr/sbin/sshd","-D"]
COMMIT git-server:01
--> 8ca0f46e00a
Successfully tagged localhost/git-server:01
8ca0f46e00a6babb59e85ee4291e423b2751520fade7623590a65a1efb177964
$

Now go to the ECR dashboard in the AWS console, click on one of your repositories and select the View push commands button. You will be given a few commands.

Run the first command, the one with the 'docker login' part in it, to authenticate to the ECR repository.

$ aws ecr get-login-password --region eu-west-1 | podman login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com
Login Succeeded!
$

Next, tag the image you just built for the ECR repository with the third command provided by the AWS console. Finally, push the image with the last command provided.

$ podman tag git-server:01 <ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com/git-server:01
$ podman push <ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com/git-server:01
Getting image source signatures
Copying blob ce9d08ae8f99 done  
Copying blob 3e21192a6b9f done  
Copying blob 3b35f4e55edf done  
Copying blob 68976608e193 done  
Copying blob 4cd90d481301 done  
Copying blob 495c40d4b8da done  
Copying blob 901b7cd805e7 done  
Copying blob 2cf3f880f1fe done  
Copying blob f25c97517522 done  
Copying config 8ca0f46e00 done  
Writing manifest to image destination
Storing signatures
$ 

For the gitweb image, use the exact same approach for building and pushing. There are 3 files associated with this container: a Dockerfile, gitweb.conf for configuring the gitweb interface, and the git.domain.com.conf Apache virtual host file.

gitweb.conf (change domain names accordingly):

our $projectroot = "/srv/git/";
our $logo = "custom/git-logo.png";
our $logo_url = "https://96-fromsofia.net";
our $favicon = "custom/favicon.ico";
our $site_name = "96-fromsofia.net Git Repository";
our $home_link_str = "96-fromsofia.net >> Git Projects";
our $home_text = "custom/home_text.html";
our $omit_age_column = "true";
our $omit_owner = "true";
our $my_url = "http://git.96-fromsofia.net/gitweb.cgi";
our $base_url = "http://git.96-fromsofia.net/";
our $snapshot = "false";
our $timed = "true";

git.domain.com.conf (change domain names accordingly):

<VirtualHost *:80>
    ServerName git.96-fromsofia.net
    ServerAlias www.git.96-fromsofia.net
    DocumentRoot /var/www/gitweb
    
    # Force HTTP to HTTPS redirects
    #RewriteEngine On
    #RewriteCond %{HTTPS} off
    #RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    
    <Directory /var/www/gitweb>
        SetEnv  GITWEB_CONFIG  /etc/gitweb.conf
        Options +ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
        AllowOverride All
        order allow,deny
        Allow from all
        AddHandler cgi-script .cgi
        DirectoryIndex gitweb.cgi
    </Directory>
    <Files gitweb.cgi>
        SetHandler cgi-script
    </Files>
    
    CustomLog /var/log/httpd/git.96-fromsofia.net-access.log combined
    ErrorLog /var/log/httpd/git.96-fromsofia.net-error.log
    
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
</VirtualHost>

<VirtualHost *:443>
    ServerName git.96-fromsofia.net
    ServerAlias www.git.96-fromsofia.net
    DocumentRoot /var/www/gitweb
    <Directory /var/www/gitweb>
        SetEnv  GITWEB_CONFIG  /etc/gitweb.conf
        Options +ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
        AllowOverride All
        order allow,deny
        Allow from all
        AddHandler cgi-script .cgi
        DirectoryIndex gitweb.cgi
    </Directory>
    
    CustomLog /var/log/httpd/git.96-fromsofia.net-access.log combined
    ErrorLog /var/log/httpd/git.96-fromsofia.net-error.log
    
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    
    # SSLEngine On
    # SSLProtocol                         all -SSLv2 -SSLv3
    # SSLCipherSuite                      ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
    # SSLHonorCipherOrder                 on
    # SSLOptions                          +StrictRequire
    # SSLCertificateFile                  /etc/letsencrypt/live/git.96-fromsofia.net/cert.pem
    # SSLCertificateKeyFile               /etc/letsencrypt/live/git.96-fromsofia.net/privkey.pem
    # SSLCertificateChainFile             /etc/letsencrypt/live/git.96-fromsofia.net/fullchain.pem
    
    <Files gitweb.cgi>
        SetHandler cgi-script
    </Files>
    
    BrowserMatch "MSIE [2-6]" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
    BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
</VirtualHost>

Pay attention to how the SSL lines are currently commented out. We want to keep them this way for now, as we haven't yet acquired our SSL certificates.

Dockerfile:

# Select your image
FROM ubi9:latest

# Install your packages and the perl modules required for gitweb
RUN yum install -y git make diffutils httpd php php-cli mod_fcgid perl-FCGI perl-filetest perl-Time-HiRes mod_ssl
RUN yum install -y https://dl.rockylinux.org/pub/rocky/9/AppStream/x86_64/os/Packages/p/perl-CGI-4.51-5.el9.noarch.rpm
RUN yum install -y https://rpmfind.net/linux/epel/9/Everything/x86_64/Packages/p/perl-FreezeThaw-0.5001-37.el9.noarch.rpm
RUN yum install -y http://repo.iotti.biz/CentOS/9/noarch/perl-CGI-Session-4.48-26.el9.lux.noarch.rpm  

# Configure the apache web server
RUN rm -f /etc/httpd/conf.d/*.conf
RUN chown apache:apache -R /etc/httpd/logs/
RUN openssl dhparam -out /etc/httpd/dh4096.pem 4096
RUN echo 'SSLOpenSSLConfCmd DHParameters /etc/httpd/dh4096.pem' >> /etc/httpd/conf.modules.d/00-ssl.conf
RUN sed -i 's/Listen 80/Listen 80\nListen 443/g' /etc/httpd/conf/httpd.conf
# RUN echo -e '<IfModule mod_ssl.c>\n    Listen 443\n</IfModule>' >> /etc/httpd/conf/httpd.conf
ADD git.96-fromsofia.net.conf /etc/httpd/conf.d/
ADD gitweb.conf /etc/

# Configure the git user and build the gitweb script
RUN useradd git
RUN mkdir /srv/git
RUN chown git:git /srv/git
USER git
WORKDIR /home/git
RUN git clone git://git.kernel.org/pub/scm/git/git.git
WORKDIR git/
RUN make GITWEB_PROJECTROOT="/srv/git" prefix=/usr gitweb
USER root
RUN cp -Rf gitweb /var/www/
RUN mkdir /var/www/gitweb/custom
RUN rm -rf /home/git/git

# Expose HTTP/S and run the apache web server
EXPOSE 80 443 
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]

Before continuing to the next step, you should have both of your images uploaded into the ECR repositories.

Step 8 - Creating the ECS infrastructure

Our ECS infrastructure relies on quite a few things we already created, like the EC2 ASG and the EFS file system; however, the actual container orchestration consists of 3 parts within ECS.

  • The cluster, which takes care of ensuring your services are running and allocated to their respective capacity providers.
  • The services determine in what manner your tasks are started and how they are maintained.
  • The tasks define which container or group of containers to run and what options to start them with. A task definition plays a similar role to a docker compose file.

We will begin by creating our ECS cluster. From the AWS ECS console’s side panel, click on Clusters. Select Create a cluster.

Enter the name of your cluster, select the correct VPC and choose at least 2 subnets. Proceed.

Image

On the next screen select Amazon EC2 Instances as the capacity provider and from the dropdown menu select the ASG we created earlier. Create the cluster.

Image
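The cluster can also be created from the CLI, although wiring the ASG up as a capacity provider takes a couple of extra calls. A rough sketch, assuming the cluster is called git-cluster and <ASG_ARN> is your Auto Scaling group's ARN:

aws ecs create-cluster --cluster-name git-cluster
# Register the ASG as a capacity provider and attach it to the cluster
aws ecs create-capacity-provider --name git-asg-cp --auto-scaling-group-provider autoScalingGroupArn=<ASG_ARN>,managedScaling={status=DISABLED}
aws ecs put-cluster-capacity-providers --cluster git-cluster --capacity-providers git-asg-cp --default-capacity-provider-strategy capacityProvider=git-asg-cp,weight=1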

Now if you go into the newly created cluster and click on Infrastructure, you will see the ASG listed under capacity providers, but most likely there will be 0 instances. This is because our EC2 cluster instance was already running before we created the cluster.

You have 2 choices here: either log in to your EC2 cluster instance and restart the ECS agent with systemctl restart ecs, OR terminate the running EC2 cluster instance and have the ASG spawn a new one. The last command in our user data will cause the EC2 cluster instance to register with the ECS cluster on boot.

Wait until you can see the EC2 cluster instance within the Infrastructure section and proceed with creating the task definition.

Again from the side panel in the ECS console, this time select Task definitions.

Image

Under Container one, populate the Name, Image URI and port mappings.

Make sure the Image URI includes the exact tag as well, because if you don't have a latest tag the pull will fail. The correct syntax is: aws-repository/your_image:your_tag.

Image

For the second container define your gitweb build.

Image

On the next page under App environment, I’ve specified the following values. The container sizes are absolutely optional and can be left blank.

Image

Now under Volumes, click Add a volume and create a new volume for each of your 4 EFS directories. The 4 volumes I created look as such:

  • Name: git-efs, Root directory: /git.96-fromsofia.net/git
  • Name: gitweb-custom, Root directory: /git.96-fromsofia.net/gitweb/custom
  • Name: gitweb-log, Root directory: /git.96-fromsofia.net/gitweb/log
  • Name: gitweb-letsencrypt, Root directory: /git.96-fromsofia.net/gitweb/letsencrypt/git.96-fromsofia.net

Image

Below the volumes, you should have an option to add mount points. These mount points represent the mount targets within your containers for the volumes you just created.

  • git-efs should be mounted on /srv/git on both containers
  • gitweb-custom should be mounted on the gitweb container under /var/www/gitweb/custom
  • gitweb-log should be mounted on the gitweb container under /var/log/
  • gitweb-letsencrypt should be mounted on the gitweb container under /etc/letsencrypt/live/git.96-fromsofia.net

The final result should look similar to the following:

Image

After your task definition has been created, you can create the final piece of infrastructure: the ECS service.

A task on its own is only that. If we want to run it, we need to do so manually; if we want to stop it, we again need to terminate it manually. This is where services play a crucial role: they maintain a desired number of tasks that are started and kept running. If the underlying EC2 instance fails, your ASG will start a new one and the ECS service will ensure your task is started on it.

Go into the cluster you created earlier and click on the Services tab. Select create a service.

In the environment section, select Launch type and set it to EC2.

Image

Under Deployment configuration, select Service and choose your task. As the Service Type select Daemon.

Image

For the additional details, I’ve kept the defaults as they meet the requirements of my use case.

Image
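For completeness, the CLI version of this service creation looks roughly like the following; a sketch assuming a cluster called git-cluster, a service called git-service and a task definition family called git-task:

# Daemon scheduling runs exactly one copy of the task on every container instance in the cluster
aws ecs create-service --cluster git-cluster --service-name git-service --task-definition git-task --launch-type EC2 --scheduling-strategy DAEMON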

Step 9 - Upload your content to EFS

Now that we've created our service, our task should be running; however, it may have failed, or the gitweb interface may be showing "404 - no projects found". This is because we have only uploaded our static data under the custom EFS folder.

Let's upload our git repositories. If you already have a local bare git repository, you can upload it to the S3 bucket.

Alternatively, you can clone an existing remote repository and make a bare clone of it locally which you then push to S3.

$ git clone old.git.96-fromsofia.net:/srv/git/Docker
$ git clone --bare Docker Docker.git
$ find Docker.git/ -type d -empty -exec touch {}/.empty \;

The last line is important if you have cloned a bare repository, as S3 will not upload directories with no files in them. This is why we create an empty file called .empty in every empty directory, so that gitweb still sees them as valid git repositories.
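The same trick applies if you want to put a brand-new, empty repository on the server rather than migrate an existing one; a minimal sketch with a hypothetical repository name:

$ git init --bare NewProject.git
$ find NewProject.git/ -type d -empty -exec touch {}/.empty \;
$ aws s3 cp NewProject.git s3://gitupload.96-fromsofia.net/git/NewProject.git --recursive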

Go through the same process for every other git repository you want to migrate. You can then use the aws s3 cp command to upload your repos to the S3 bucket.

$ ls
'BASH Scripts.git'   Docker.git
$ aws s3 cp $PWD s3://gitupload.96-fromsofia.net/git/ --recursive
upload: BASH Scripts.git/branches/.empty to s3://gitupload.96-fromsofia.net/git/BASH Scripts.git/branches/.empty
upload: BASH Scripts.git/HEAD to s3://gitupload.96-fromsofia.net/git/BASH Scripts.git/HEAD
...
upload: Docker.git/refs/heads/.empty to s3://gitupload.96-fromsofia.net/git/Docker.git/refs/heads/.empty
upload: Docker.git/objects/pack/pack-461ab6b7bef1bac64b5e387d54efe771cb227aec.pack to s3://gitupload.96-fromsofia.net/git/Docker.git/objects/pack/pack-461ab6b7bef1bac64b5e387d54efe771cb227aec.pack
$ 

Now, again from the AWS CLI, we will start the DataSync replication task.

First, list the available tasks. Next, run the git-sync task we created earlier. Once started, you can monitor the task with the watch command until it finishes.

$ aws datasync list-tasks
TASKS   git-sync        AVAILABLE       arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>
TASKS   letsencrypt-sync        AVAILABLE       arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>
$ aws datasync start-task-execution --task-arn arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>
arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>/execution/exec-08d3ca972f8ed5ed4
$ watch -n 30 aws datasync list-task-executions
...
Every 30.0s: aws datasync list-task-executions                                                                                                                        ip-10-0-151-137.eu-central-1.compute.internal: Sat Jan 28 03:32:02 2023

TASKEXECUTIONS  LAUNCHING	arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>/execution/exec-08d3ca972f8ed5ed4

...
TASKEXECUTIONS  SUCCESS	arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>/execution/exec-08d3ca972f8ed5ed4

Voilà. Go back to the ECS console and see if your task is still running; if not, restart the service and wait for the task to come back online.

Make sure that the A record for git.domain.com (or whatever you want your gitweb interface to be accessible at) is pointing to the elastic IP we provisioned earlier.

If you navigate to your gitweb domain, you should see the gitweb script running, your favicon, git-logo and greeting text all in place, and your git repositories listed.

Image

Looking good, right? However, you should notice that your browser has an unhappy looking red padlock next to the address bar, indicating a security issue. This makes sense considering our vhost listens on port 443 but has no SSL certificate.

Step 10 - Adding an SSL certificate

This next step is optional; however, considering how easy it is to set up SSL certificates with Let's Encrypt, I issued one for git.96-fromsofia.net and will walk you through the process of acquiring one yourself. There are different ways of doing ACME validation and acquiring Let's Encrypt certificates; I've decided to use a certbot authentication hook that allows DNS validation.

I chose this method as it makes it easier to just request and get new certificates when I need them, upload them to s3 and replicate them over to my running container. I don’t need to reconfigure my containers, share additional paths or access them. All I need is my local machine and access to my DNS hosted zone.

I’ve created a Containerfile you can build and run locally to get the certbot-dns validator started.

Dockerfile:

# Select your image
FROM ubi9:latest

# Install epel, wget and certbot. Download the acme-dns validator
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
RUN yum install wget certbot -y
RUN wget https://github.com/joohoi/acme-dns-certbot-joohoi/raw/master/acme-dns-auth.py

# Amend the python script, make it executable and place it in /etc/letsencrypt
RUN sed -i 's/python/python3/g' acme-dns-auth.py 
RUN chmod +x acme-dns-auth.py 
RUN mv acme-dns-auth.py /etc/letsencrypt/

# Make the output dir for the SSL certificates and create the startup script
RUN mkdir /letsencrypt
RUN echo -e 'certbot certonly --manual --manual-auth-hook /etc/letsencrypt/acme-dns-auth.py --preferred-challenges dns --debug-challenges --email mailbox@domain.com --agree-tos --no-eff-email -d git.example.com -d www.git.example.com && cp -aL /etc/letsencrypt/live/git.example.com/* /letsencrypt' > /.startup.sh

# Make your startup script executable and run it
RUN chmod +x /.startup.sh
CMD ["/bin/bash","/.startup.sh"]

Build the container using:

$ podman build -t certbot:01 .
STEP 1/11: FROM ubi9:latest
STEP 2/11: RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
...
COMMIT certbot:01
--> f05d4449b01
Successfully tagged localhost/certbot:01
f05d4449b0121d5d65af09257c44f854ca40bb9cc86c8017fc61a944451624d3
$ 

Make sure there is a letsencrypt directory in your current location and run the container image as such:

$ podman run -it -v ./letsencrypt/:/letsencrypt:Z certbot:01
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Account registered.
Requesting a certificate for git.96-fromsofia.net and www.git.96-fromsofia.net
Hook '--manual-auth-hook' for git.96-fromsofia.net ran with output:
 Please add the following CNAME record to your main DNS zone:
 _acme-challenge.git.96-fromsofia.net CNAME 453f73dc-8df0-40cd-86e3-30b7c3565264.auth.acme-dns.io.
Hook '--manual-auth-hook' for www.git.96-fromsofia.net ran with output:
 Please add the following CNAME record to your main DNS zone:
 _acme-challenge.www.git.96-fromsofia.net CNAME 6e11249a-277a-48ad-80c9-78f55d51ca22.auth.acme-dns.io.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Challenges loaded. Press continue to submit to CA.
Pass "-v" for more info about challenges.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

At this point, copy the two lines that look as such:

_acme-challenge.git.96-fromsofia.net CNAME 453f73dc-8df0-40cd-86e3-30b7c3565264.auth.acme-dns.io.
_acme-challenge.www.git.96-fromsofia.net CNAME 6e11249a-277a-48ad-80c9-78f55d51ca22.auth.acme-dns.io.

Add the CNAME records for each of these _acme-challenge subdomains in your DNS hosted zone. Give it a few minutes, depending on the TTL settings of your DNS, and hit Enter.

You should now have your SSL certificates generated and placed into the local letsencrypt directory.

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/git.96-fromsofia.net/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/git.96-fromsofia.net/privkey.pem
This certificate expires on 2023-04-28.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
 * Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 * Donating to EFF:                    https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
$ 

Next, similarly to how we uploaded our git repos and replicated them to EFS, we will follow the same process for our Let's Encrypt certificates. Obviously, we will amend the S3 upload path and the task ARN for the DataSync replication task.

$ cd letsencrypt/
$ ls 
cert.pem  chain.pem  fullchain.pem  privkey.pem  README
$ aws s3 cp $PWD s3://gitupload.96-fromsofia.net/letsencrypt/ --recursive
upload: ./cert.pem to s3://gitupload.96-fromsofia.net/letsencrypt/cert.pem
upload: ./README to s3://gitupload.96-fromsofia.net/letsencrypt/README
upload: ./fullchain.pem to s3://gitupload.96-fromsofia.net/letsencrypt/fullchain.pem
upload: ./chain.pem to s3://gitupload.96-fromsofia.net/letsencrypt/chain.pem
upload: ./privkey.pem to s3://gitupload.96-fromsofia.net/letsencrypt/privkey.pem
$ aws datasync list-tasks
TASKS   git-sync        AVAILABLE       arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>
TASKS   letsencrypt-sync        AVAILABLE       arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>
$ aws datasync start-task-execution --task-arn arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>
arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>/execution/exec-0aee45ab1bf865e19
$ 
...
$ aws datasync list-task-executions | tail -1
TASKEXECUTIONS	SUCCESS	arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>/execution/exec-0aee45ab1bf865e19
$ 

Now if we check the domain, nothing will be different and the SSL issue will still appear in our browser, regardless of us having uploaded the certificates to EFS. The reason is that the SSL section in our vhost is still commented out.

Edit the vhost and make sure the following lines are enabled and not commented out.

[ec2-user@ip-10-0-151-137 git-web]$ vim git.96-fromsofia.net.conf 
...
    SSLEngine On
    SSLProtocol                         all -SSLv2 -SSLv3
    SSLCipherSuite                      ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
    SSLHonorCipherOrder                 on
    SSLOptions                          +StrictRequire
    SSLCertificateFile                  /etc/letsencrypt/live/git.96-fromsofia.net/cert.pem
    SSLCertificateKeyFile               /etc/letsencrypt/live/git.96-fromsofia.net/privkey.pem
    SSLCertificateChainFile             /etc/letsencrypt/live/git.96-fromsofia.net/fullchain.pem

If you would like apache to redirect HTTP to HTTPS, ensure the below is not commented out either:

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

Now, using the same podman build, podman tag and podman push commands from before, rebuild, re-tag and push your newly updated gitweb image.

If you pushed the new image with the same tag as the old one, there is nothing you need to do besides restart the ECS service that is running your task.

If you chose a new tag when pushing the new image, make sure to edit the task definition image URI for the gitweb container.
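Restarting the service from the CLI amounts to forcing a new deployment, which makes ECS pull the image again and relaunch the task; a sketch assuming the cluster and service names used earlier:

aws ecs update-service --cluster git-cluster --service git-service --force-new-deployment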

Step 11 - Testing it all

Connections to our website should now be secure and the SSL setup successful:

Image

We can also use curl to test the website from the terminal.

$ curl -ILks https://git.96-fromsofia.net
HTTP/1.1 200 OK
Date: Sat, 28 Jan 2023 05:08:00 GMT
Server: Apache/2.4.53 (Red Hat Enterprise Linux) OpenSSL/3.0.1 mod_fcgid/2.3.9
Content-Type: text/html; charset=utf-8
$ curl -Iv https://git.96-fromsofia.net 2>&1 
*   Trying 34.246.205.187:443...
* TCP_NODELAY set
* Connected to git.96-fromsofia.net (34.246.205.187) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=git.96-fromsofia.net
*  start date: Jan 28 03:28:03 2023 GMT
*  expire date: Apr 28 03:28:02 2023 GMT
*  subjectAltName: host "git.96-fromsofia.net" matched cert's "git.96-fromsofia.net"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
> HEAD / HTTP/1.1
> Host: git.96-fromsofia.net
> User-Agent: curl/7.68.0
> Accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Sat, 28 Jan 2023 05:08:04 GMT
Date: Sat, 28 Jan 2023 05:08:04 GMT
< Server: Apache/2.4.53 (Red Hat Enterprise Linux) OpenSSL/3.0.1 mod_fcgid/2.3.9
Server: Apache/2.4.53 (Red Hat Enterprise Linux) OpenSSL/3.0.1 mod_fcgid/2.3.9
< Content-Type: text/html; charset=utf-8
Content-Type: text/html; charset=utf-8

< 
* Connection #0 to host git.96-fromsofia.net left intact
$ 

Additionally, we need to test the functionality of our git server.

We should be able to clone existing repos, add files to them, make commits and push the new files back. All of that should happen over SSH; however, a bash shell must not be accessible for the git user.

$ git clone git96:/srv/git/Docker.git
Cloning into 'Docker'...
Warning: the ECDSA host key for '[git.96-fromsofia.net]' differs from the key for the IP address '[34.246.205.187]'
Offending key for IP in .ssh/known_hosts:61
Matching host key in .ssh/known_hosts:64
Are you sure you want to continue connecting (yes/no)? yes
remote: Enumerating objects: 31, done.
remote: Counting objects: 100% (31/31), done.
remote: Compressing objects: 100% (27/27), done.
remote: Total 31 (delta 2), reused 31 (delta 2), pack-reused 0
Receiving objects: 100% (31/31), 22.10 KiB | 11.05 MiB/s, done.
Resolving deltas: 100% (2/2), done.
$ cd Docker/
$ mkdir certbot-dns/letsencrypt -p
$ cd certbot-dns/
$ vim Dockerfile
$ cd ../
$ git add certbot-dns/
$ git commit -am 'Add the certbot-dns dockerfile'
[master 2976006] Add the certbot-dns dockerfile
 1 file changed, 17 insertions(+)
 create mode 100644 certbot-dns/Dockerfile
$ git push origin master
Warning: the ECDSA host key for '[git.96-fromsofia.net]' differs from the key for the IP address '[34.246.205.187]'
Offending key for IP in .ssh/known_hosts:61
Matching host key in .ssh/known_hosts:64
Are you sure you want to continue connecting (yes/no)? yes
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 726 bytes | 726.00 KiB/s, done.
Total 4 (delta 1), reused 0 (delta 0)
To git96:/srv/git/Docker.git
   8cd32b0..2976006  master -> master
$ 

Considering you have to specify an SSH key, a custom port and the git user for these operations, you may want to create ~/.ssh/config with the following contents:

Host git-server
    Hostname git.96-fromsofia.net
    Port <CUSTOM_TCP_PORT> 
    User git
    IdentityFile /path/to/private-key 

Then you can interact with your repository as such:

git clone git-server:/path/to/repo.git

The End

Congratulations on making it through this lengthy process of creating and coupling all these different resources together. I hope my explanations were clear enough to follow and that the examples given here were helpful.

By this point you should have created all the infrastructure outlined at the beginning of this article, and your application should have all the functionality described there.

If you've enjoyed this, make sure to go ahead and look at the Articles section. You can find my personal projects on my git server. If you have a question or want to get in touch, feel free to email me.

Thank you for reading and have a good night!