Git server with gitweb interface running on ECS

Overview

In this article, we will be setting up our first ECS cluster and creating the required infrastructure to run our application. The application in this example will be a git server which will support:

  • cloning and pushing to existing repositories with ssh-key authentication
  • creating new repositories on the git server with s3 and datasync
  • visualizing our git repository with the web UI gitweb

There are quite a few resources we will need to set up, and we'll be doing this manually. I am planning to release CloudFormation templates for builds like this in the foreseeable future, but I also see a benefit in knowing how to perform the process manually.

Infrastructure Overview

The infrastructure will consist of the following:

  • ECS cluster to run our task
  • Task definition to run our containers within the cluster
  • Task service to ensure our task is running
  • ECR repository for our docker images
  • Elastic IP for our EC2 cluster instance
  • EC2 launch template for our ASG
  • An ASG with a fixed capacity working as the compute resource behind the ECS cluster
  • An EFS filesystem for highly available and resilient persistent storage in our containers
  • Security groups for the EC2 instance and EFS file system
  • IAM role for the EC2 cluster instance
  • IAM user for the aws-cli
  • S3 bucket for creating new git repositories and uploading new letsencrypt SSL certificates
  • DataSync tasks for replication between S3 and EFS

Prerequisites

This article assumes the following components are already present and won’t be covered here:

  • AWS account with an IAM user having the necessary permissions for creating the above infrastructure
  • Either a domain or subdomain for your git server
  • An AWS VPC with a public subnet accessible from the internet

Step 1 - Setting up the storage services and replication

As explained, we will be using S3 and EFS from AWS’ storage portfolio.

EFS will be used to mount some directories into our containers for persistent storage. My reasoning behind choosing EFS rather than the standard EBS volumes is the ability to simultaneously mount the same directory across different instances and the high availability and automatic backups that EFS comes with. This means I don’t have to worry about backups for the EC2 instances’ EBS volumes which makes the EC2 instances easily replaceable compute resources - this in my opinion is a perfect fit for container workloads.

Type EFS in the search box from your AWS console and select the service.

Click on Create file system, add a name for the filesystem and select your VPC.

create-efs01

Click on Create. If you go over to the Network tab in the EFS view, you will see the mount targets being created.

create-efs02

This will take some time but you can proceed with creating the other resources and we will come back to EFS a bit later.
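If you prefer scripting this later, a rough CLI equivalent looks like the following - a minimal sketch, with the file system ID, subnet ID and security group ID as placeholders for your own values:

# Create the file system (settings such as backups are left at their defaults)
aws efs create-file-system --creation-token git-efs --tags Key=Name,Value=git-efs

# Create a mount target in each subnet of your VPC (repeat per subnet)
aws efs create-mount-target --file-system-id fs-xxxxxxxx --subnet-id subnet-xxxxxxxx --security-groups sg-xxxxxxxx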

The S3 bucket will be used for 2 things: uploading new repositories (not cloning/pushing) to the git container and uploading new SSL certificates to the gitweb container.

This way, I can increase my containers' security by disabling the tty on the git container and allowing SSH connections to use only the git-shell, meaning SSH connections will refuse to provide a shell and scp won't work. As for the gitweb container, this means I don't have to run an SSH server on it at all, and the only thing it has to expose is the website via ports 80 and 443.

Head over to the S3 console and create a new bucket with all the default settings, e.g. public access blocked, ACLs disabled, etc.
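For reference, a rough CLI equivalent - a sketch only, with the bucket name and region as examples (pick your own):

aws s3api create-bucket --bucket gitupload.96-fromsofia.net --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

# Keep the default "block all public access" behaviour
aws s3api put-public-access-block --bucket gitupload.96-fromsofia.net --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true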

Step 2 - Creating our IAM resources

We will need to manually create 2 resources in the IAM console.

The IAM role will be used by the EC2 cluster instance and will have the standard execution managed policy for ECS instances as well as a customer managed policy we will create ourselves. The customer managed policy is required so that our instance has permission to associate an Elastic IP with itself. Later, when we are creating our launch template and auto scaling group, you will see why this is important.

The first thing, however, is to make sure we have an Elastic IP allocated in the region where our EC2 cluster instance will run.

In the EC2 console on the left side panel under Network and Security go to the Elastic IP dashboard.

elasticip01

Once you have the Elastic IP allocated, you will be able to see its allocation ID. Note this down as it will be required in the next step.

elasticip02
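The same can be done from the CLI; the allocation ID comes back in the response (shown here only as a sketch):

aws ec2 allocate-address --domain vpc
aws ec2 describe-addresses --query 'Addresses[].{PublicIp:PublicIp,AllocationId:AllocationId}'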

For the policy go to the IAM console and under Access Management select Policies.

create-policy01

Click on Create a policy and choose the JSON option.

create-policy02

Paste the following IAM policy into the editor, making sure you replace the <ACCOUNT_ID> and <ALLOCATION_ID> appropriately:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2AssociateEIP",
      "Effect": "Allow",
      "Action": "ec2:AssociateAddress",
      "Resource": [
        "arn:aws:ec2:*:<ACCOUNT_ID>:elastic-ip/<ALLOCATION_ID>",
        "arn:aws:ec2:*:<ACCOUNT_ID>:instance/*"
      ]
    }
  ]
}

Review your changes and save the policy.

create-policy03

Now for the role, still from the IAM console select Roles from the side menu and choose Create a role.

For the Trusted entity type choose AWS Service and under Common use cases select EC2 and proceed.

create-iamrole00

From Add permissions, look for the policy you just created and select it.

create-iamrole01

Before proceeding, look for the AmazonEC2ContainerServiceforEC2Role managed policy and make sure it is selected as well.

create-iamrole02

Continue and on the following screen enter a name for your role and a brief description of its intended use. Click on Create role.

create-iamrole03
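If you ever need to rebuild this role from the CLI, a sketch could look like the following. The role name, policy name and trust-policy.json file are placeholders I've picked for illustration; trust-policy.json is assumed to contain the standard EC2 trust policy (Service: ec2.amazonaws.com, Action: sts:AssumeRole):

aws iam create-role --role-name ecsGitInstanceRole --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name ecsGitInstanceRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
aws iam attach-role-policy --role-name ecsGitInstanceRole --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<YOUR_EIP_POLICY_NAME>

# EC2 consumes the role through an instance profile
aws iam create-instance-profile --instance-profile-name ecsGitInstanceRole
aws iam add-role-to-instance-profile --instance-profile-name ecsGitInstanceRole --role-name ecsGitInstanceRole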

An IAM user with access to DataSync and S3 will be required too, so we can upload data from our local machine to the bucket and also execute the DataSync replication tasks remotely from our workstation. Additionally, for convenience, we will want to allow this user access to the ECR repositories so we can use it to push our Docker images.

Again from the IAM console’s side panel select the Users option. Access to the management console will not be required.

Enter a username and create your user.

create-iam-user01

Click on your newly created user and select the Security Credentials tab.

create-iam-user02

Under Access Keys click on Create an Access Key. For our setup the top option would suffice.

create-iam-user03

When you click on Next your Access key will be displayed alongside your Secret Access Key. The Secret Access key is shown only once at this screen and can never be retrieved again. If you don’t write it down now, you will need to create a new Access and Secret key combination and note down their values.

Finally, go to the Permissions tab and ensure you have added the following 3 managed policies:

  • AmazonEC2ContainerRegistryFullAccess
  • AmazonS3FullAccess
  • AWSDataSyncFullAccess
create-iam-user04
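As a CLI sketch (the user name git-admin is just an example):

aws iam create-user --user-name git-admin
aws iam create-access-key --user-name git-admin
aws iam attach-user-policy --user-name git-admin --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-user-policy --user-name git-admin --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name git-admin --policy-arn arn:aws:iam::aws:policy/AWSDataSyncFullAccess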

Step 3 - Creating Security Groups

We will need 2 security groups for this setup. One will be applied to our EC2 cluster instance and will allow inbound traffic for SSH, HTTP/S and a custom TCP port.

The other group will be applied to the EFS file system and will naturally allow NFS traffic, but from the EC2 cluster instance’s security group only.

Go over to the EC2 console and under Network and Security click on Security Groups.

create-an-sg01

Enter a name and description for the security group and make sure to select the VPC you are planning to use with the EC2 cluster instance.

create-an-sg02

The inbound Rules for the EC2 cluster instance are as follows:

create-an-sg03
  • For SSH access I have chosen to allow connections originating only from the VPC’s private CIDR range
  • The custom TCP port will be mapped to port 22 on the git container. The port number itself can be any arbitrary high port of your choosing. This will allow us to use SSH for git clone and push commands
  • The HTTP and HTTPS rules will be used by the gitweb container

Follow the same process for the second security group and add only one inbound rule.

create-an-sg04

The Type should be set to NFS. Set the source to Custom and enter the first security group's ID. This will ensure NFS connections are allowed only from resources that are part of the EC2 cluster instance's security group.
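The same two groups can also be sketched out with the CLI; the VPC ID, CIDR range and group IDs below are placeholders, and <CUSTOM_TCP_PORT> is the port you picked above:

# EC2 cluster instance security group
aws ec2 create-security-group --group-name git-ec2-sg --description "git ECS cluster instance" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-EC2GROUP --protocol tcp --port 22 --cidr 10.55.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-EC2GROUP --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-EC2GROUP --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-EC2GROUP --protocol tcp --port <CUSTOM_TCP_PORT> --cidr 0.0.0.0/0

# EFS security group - NFS (2049) allowed only from the group above
aws ec2 create-security-group --group-name git-efs-sg --description "git EFS access" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-EFSGROUP --protocol tcp --port 2049 --source-group sg-EC2GROUP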

Now that the EFS security group has been created, head over to the EFS console and select your file system.

Under the Network tab, click the button to modify the mount target settings. From the security group dropdown, select the EFS security group we just created.

finalize-efs01

Make sure to repeat the process for each of the mount targets listed here.

finalize-efs02

Step 4 - Create the DataSync replication tasks

DataSync plays an important role here because it gives us a way to replicate the S3 data over to the EFS file system on demand.

To create the DataSync tasks in your AWS account, head over to the DataSync console. In the Getting started menu on the right side, choose the Between AWS services dropdown option.

datasync01

On the next screen, choose to create a new location and specify the S3 bucket created earlier.

I will be creating 2 DataSync tasks: one that pushes everything uploaded to the s3://bucket/git/ folder to a specific path in EFS, and another one for s3://bucket/letsencrypt.

The IAM role should be auto-generated at this same screen if it doesn’t already exist.

datasync02

For the destination, again choose to create a new location and select the EFS service. Select your region and file system ID.

For the mount path you can technically select whatever path you want, as long as you also create it in the EFS file system. The path doesn't need to exist to create the replication task, but it must exist before the task is launched.

Finally, select the security group. Here you should select the EC2 cluster instance security group and NOT the EFS one. Remember, datasync will want to access and write to the EFS, and we granted access to EFS from the EC2 cluster instance’s security group.

datasync03

On the next screen you can modify the replication settings. I personally stuck with the defaults: verify the transferred data's integrity and transfer only data that has changed at the source, which reduces costs from redundant network traffic.

datasync04

A CloudWatch log group will need to be selected. If you don't have an existing one, you can choose to auto-generate it here.

datasync05

Once your first replication task has been created, follow the same process to create another one for the letsencrypt SSL certificates.

The only difference should be the bucket: s3://bucket/letsencrypt/ and EFS: /git.96-fromsofia.net/gitweb/letsencrypt/git.96-fromsofia.net/ paths.

Again the EFS path can be set to whatever you like as long as you make sure to create it.

datasync06
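If you want to drive this from the CLI later, the rough shape of the calls is below - a sketch with placeholder ARNs; the EFS location takes the subnet ARN and, as noted above, the EC2 cluster instance's security group ARN:

aws datasync create-location-s3 --s3-bucket-arn arn:aws:s3:::gitupload.96-fromsofia.net --subdirectory /git --s3-config BucketAccessRoleArn=arn:aws:iam::<ACCOUNT_ID>:role/<DATASYNC_S3_ROLE>

aws datasync create-location-efs --efs-filesystem-arn arn:aws:elasticfilesystem:eu-west-1:<ACCOUNT_ID>:file-system/fs-xxxxxxxx --subdirectory /git.96-fromsofia.net/git --ec2-config SubnetArn=arn:aws:ec2:eu-west-1:<ACCOUNT_ID>:subnet/subnet-xxxxxxxx,SecurityGroupArns=arn:aws:ec2:eu-west-1:<ACCOUNT_ID>:security-group/sg-EC2GROUP

aws datasync create-task --source-location-arn <S3_LOCATION_ARN> --destination-location-arn <EFS_LOCATION_ARN> --name s3-to-efs-git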

Step 5 - Creating our EC2 infrastructure

In this step, we create the EC2 launch template that our ASG will use to spawn instances. This ASG will in turn serve as the ECS cluster's capacity provider.

To create a launch template go to the EC2 console and under Instances select Launch Templates.

create-launchtemplate01

Click to create a launch template and you will be taken to the template configuration section. The first thing we will need is to select an AMI.

You can choose any AMI and install the ECS agent on your instance manually or via a user-data script; however, I find using the Amazon ECS-optimized AMI a much more convenient option.

To choose the ECS-optimized AMI, type ecs in the AMI search box and hit Enter. You will be taken to the AWS Marketplace results, where you can select the latest Amazon Linux ECS-optimized AMI; once selected, you will be taken back to the launch template screen.

create-launchtemplate02

Next, if you are planning to have access to your Docker host, in this case the ECS container instance, you will probably also want to specify a key pair for the instance. Make sure you have one generated and select it.

Next, select your subnet and security group. The security group should be the one you created for the EC2 cluster instance.

Expand the Advanced network configuration section and make sure that Auto-assign public IPv4 is Enabled! This is required so we can get a public IPv4 DNS which ECS needs to register our instance.

create-launchtemplate03

Scroll all the way to the bottom of the page and expand the Advanced details section.

Under IAM instance profile, make sure you have selected the IAM role we created in Step 2.

Under DNS Hostname, ensure 'Enable resource based IPv4 (A record) DNS requests' is enabled.

create-launchtemplate04

Now, at the very bottom of the page under User data, paste the script below and amend the following values:

  • <CLUSTER_NAME>: Set this to the name of your ecs cluster
  • <ALLOCATION_ID>: This is the same allocation ID for the elastic IP we needed earlier
#!/bin/bash
# Update the OS and install the unzip package
yum -y update
yum install -y unzip

# Add your cluster name to the ECS_CLUSTER variable so the ecs daemon can register your instance
echo ECS_CLUSTER="<CLUSTER_NAME>" >> /etc/ecs/ecs.config

# Download and install the AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install

# Allocate your elastic IP with the EC2 instance
/usr/local/bin/aws ec2 associate-address --instance-id `curl http://169.254.169.254/latest/meta-data/instance-id` --allocation-id <ALLOCATION_ID>

# Restart the ecs daemon so your instance registers with the cluster
systemctl restart ecs

With all of our settings applied, we can now save the launch template and head over to the ASG dashboard.

create-asg00

Click on Create an auto scaling group and enter the name of the ASG in the pop-up screen. Make sure to also select the launch template you’ve just created.

create-asg01

In the next step, make sure to select your VPC as well as at least 2 AZs. The reason behind this is that in case of AZ maintenance, or in the unlikely case of a disaster, your EC2 instance can be spun up in another AZ, ensuring availability of your containers.

create-asg02

The next step heavily depends on what you are willing to spend.

I am trying to keep costs minimized with this project and I am the only person working with this git server. The gitweb interface is mostly up there as part of my visual portfolio and sees very little traffic. Although I do treat this as a production system, it is essentially a personal website. That being said, I am not creating a load balancer and in this setup I will leave the defaults as they are.

create-asg03

The next screen covers capacity. If you chose to use a load balancer in the previous step, you can drop the Elastic IP association from the user data script in your launch template, as the public IP will live on the load balancer. In my case, however, I am setting the minimum, maximum and desired instance counts all to 1 to ensure exactly one instance is always running.

create-asg04

Save your settings and create the ASG. In a few minutes, you will see an instance being started in the EC2 console. This is your EC2 cluster instance.
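For reference, the equivalent CLI call would look roughly like this (template name, ASG name and subnet IDs are placeholders):

aws autoscaling create-auto-scaling-group --auto-scaling-group-name git-ecs-asg --launch-template LaunchTemplateName=git-ecs-template,Version='$Latest' --min-size 1 --max-size 1 --desired-capacity 1 --vpc-zone-identifier "subnet-aaaaaaaa,subnet-bbbbbbbb"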

Step 6 - Create the EFS file system structure

There are different approaches you can take for this step; depending on what access you have to your EC2 cluster instance, you may be able to get away with using just that instance. I personally allow SSH access to my EC2 instance only from the internal network in the VPC and have other public instances serving as bastions.

You will need to create some directories within the EFS filesystem, which means you will need access to an EC2 instance which is associated with the EC2 cluster instance security group and subnet.

When you click on the EFS filesystem you created earlier in the AWS console, you will see a button Attach. Click on it and you will be provided with commands you can use internally in your VPC to mount the EFS file system.

Copy the command given to you and mount the EFS partition in your EC2 instance:

sudo mkdir /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 10.55.10.125:/ /mnt/efs

df -hTP /mnt/efs

sudo mkdir -p /mnt/efs/git.96-fromsofia.net/{git,gitweb/{custom,letsencrypt/git.96-fromsofia.net,log}}/

# The host won't necessarily have a git user, so chown by the UID/GID the container's git user uses (3091)
sudo chown 3091:3091 -R /mnt/efs/git.96-fromsofia.net/git/

Now that the EFS file system has been mounted, you can copy your static data into the ../gitweb/custom folder.

The static data in my case consists of a favicon.ico, a git-logo.png and a home_text.html.

These are optional gitweb values to amend the favicon, the logo displayed in the top right corner, and the greeting message on your default projects view.

scp -i <SSH_KEY> * ec2-user@54.194.64.190:~/

ssh -i <SSH_KEY> ec2-user@54.194.64.190

sudo cp favicon.ico git-logo.png home_text.html /mnt/efs/git.96-fromsofia.net/gitweb/custom/

Step 7 - Registry repository and containers

In this step, we will get to build our Docker containers and create the ECR registry repository. The repository will be hosting our docker images and is where the ECS cluster pulls them from to run our tasks.

From the search box in the AWS console, type ECR and you should see the container registry service appear. Click on it. Select Create repository.

One thing that is quite important here is to make sure your Repository name matches the exact name of your image. For example, if your image is called my-image make sure to enter my-image in the Repository name box here.

Ensure Private is selected under Visibility settings and create the repository.

create-ecr01

Create a repository for each image. At the end, they will both appear in the console.

create-ecr02
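The CLI equivalent is a one-liner per repository (git-server matches the image built later; gitweb is just an assumed name for the second image, so use whatever you actually tag it as):

aws ecr create-repository --repository-name git-server
aws ecr create-repository --repository-name gitweb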

Setting up AWS CLI

Remember the IAM user we created in Step 2, where you had to note down the secret access key? Now you will need it.

The next step is quickly setting up the AWS CLI and our development environment. This can be an on-prem VM, your Linux laptop or another EC2 instance as long as the machine can connect to the internet.

Use the following commands to download, install and configure the AWS CLI:
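If the CLI is not installed on this machine yet, the download and install commands are the same ones used in the user-data script earlier (sudo added here since you are probably not root on your workstation):

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install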

aws configure

Git Server Image

The git-server image will consist of 2 files: a Dockerfile and a public SSH key. The image will install the git and openssh-server packages, create a git user with access to the git-shell, create the mount target directory for the git repositories, and run the SSH server.

ssh-key.pub (example):

ssh-rsa <SSH-KEY> <USERNAME>

Dockerfile:

FROM ubi9:latest
RUN yum install -y git openssh-server
RUN ssh-keygen -A

# Setup the git user and create the git repo mount target
RUN useradd -s `which git-shell` -u 3091 git
RUN mkdir -p /srv/git

# Add your public key for passwordless ssh authentication with git interactions
RUN mkdir /home/git/.ssh && chmod 700 /home/git/.ssh
ADD ./ssh-key.pub /home/git/.ssh/authorized_keys
RUN chmod 600 /home/git/.ssh/authorized_keys && chown git:git -R /home/git/.ssh

# Secure git-shell against: CVE-2017-8386
RUN sed -i '1s/^/no-pty /g' /home/git/.ssh/authorized_keys

# Expose port 22 and run the ssh daemon
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]

To build your container run:

podman build -t git-server:01 .

Now go to the ECR dashboard in the AWS console, click on one of your repositories and select the View push commands button.

Run the first one that has the ‘docker login’ part in it to authenticate to the ECR repository:

aws ecr get-login-password --region eu-west-1 | podman login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com

Next, tag the image and push it:

podman tag git-server:01 <ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com/git-server:01
podman push <ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com/git-server:01

Gitweb Image

For the gitweb image, you follow the exact same approach for building and pushing it. There are 3 files associated with this container: a Dockerfile, gitweb.conf for the configuration of the gitweb interface, and the git.domain.com.conf Apache virtual host file.

gitweb.conf:

our $projectroot = "/srv/git/";
our $logo = "custom/git-logo.png";
our $logo_url = "https://96-fromsofia.net";
our $favicon = "custom/favicon.ico";
our $site_name = "96-fromsofia.net Git Repository";
our $home_link_str = "96-fromsofia.net >> Git Projects";
our $home_text = "custom/home_text.html";
our $omit_age_column = "true";
our $omit_owner = "true";
our $my_url = "http://git.96-fromsofia.net/gitweb.cgi";
our $base_url = "http://git.96-fromsofia.net/";
our $snapshot = "false";
our $timed = "true";

git.domain.com.conf - see full article for complete Apache configuration with SSL settings.

Dockerfile:

FROM ubi9:latest
RUN yum install -y git make diffutils httpd php php-cli mod_fcgid perl-FCGI perl-filetest perl-Time-HiRes mod_ssl
RUN yum install -y https://dl.rockylinux.org/pub/rocky/9/AppStream/x86_64/os/Packages/p/perl-CGI-4.51-5.el9.noarch.rpm
RUN yum install -y https://rpmfind.net/linux/epel/9/Everything/x86_64/Packages/p/perl-FreezeThaw-0.5001-37.el9.noarch.rpm
RUN yum install -y http://repo.iotti.biz/CentOS/9/noarch/perl-CGI-Session-4.48-26.el9.lux.noarch.rpm

# Configure the apache web server
RUN rm -f /etc/httpd/conf.d/*.conf
RUN chown apache:apache -R /etc/httpd/logs/
RUN openssl dhparam -out /etc/httpd/dh4096.pem 4096
RUN echo 'SSLOpenSSLConfCmd DHParameters /etc/httpd/dh4096.pem' >> /etc/httpd/conf.modules.d/00-ssl.conf
RUN sed -i 's/Listen 80/Listen 80\nListen 443/g' /etc/httpd/conf/httpd.conf

ADD git.96-fromsofia.net.conf /etc/httpd/conf.d/
ADD gitweb.conf /etc/

# Configure the git user and build the gitweb script
RUN useradd git
RUN mkdir /srv/git
RUN chown git:git /srv/git
USER git
WORKDIR /home/git
RUN git clone git://git.kernel.org/pub/scm/git/git.git
WORKDIR git/
RUN make GITWEB_PROJECTROOT="/srv/git" prefix=/usr gitweb
USER root
RUN cp -Rf gitweb /var/www/
RUN mkdir /var/www/gitweb/custom
RUN rm -rf /home/git/git

# Expose HTTP/S and run the apache web server
EXPOSE 80 443
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]

Step 8 - Creating the ECS infrastructure

Our ECS infrastructure consists of a lot of things that we already created like the EC2 ASG and the EFS filesystem, however the actual container orchestration consists of 3 parts within ECS:

  • The cluster, which takes care of ensuring your services are running and allocated to their respective capacity providers
  • The services determine in what manner your tasks are started and how they are maintained
  • The tasks define which container or group of containers to run and what options to start them with

We will begin by creating our ECS cluster. From the AWS ECS console’s side panel, click on Clusters. Select Create a cluster.

Enter the name of your cluster, select the correct VPC and choose at least 2 subnets. Proceed.

create-cluster02

On the next screen select Amazon EC2 Instances as the capacity provider and from the dropdown menu select the ASG we created earlier. Create the cluster.

create-cluster03

Now if you go into the newly created cluster and click on Infrastructure, you will see the ASG listed under capacity providers, but most likely there will be 0 instances. This is because our EC2 cluster instance was already running before we created the cluster.

You have 2 options here: either log in to your EC2 cluster instance and restart the ECS agent (systemctl restart ecs), OR terminate the running EC2 cluster instance and have the ASG spawn a new one.

Wait until you can see the EC2 cluster instance within the Infrastructure section and proceed with creating the task definition.
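You can also check the registration from the CLI (cluster name being whatever you chose above):

aws ecs list-container-instances --cluster <CLUSTER_NAME>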

Again from the side panel in the ECS console, this time select Task definitions.

create-task-definition00

Under Container one, populate the Name, Image URI and port mappings.

Make sure the Image URI includes the exact tag as well because if you don’t have a latest tag this will fail.

Correct syntax should be: aws-repository/your_image:your_tag

create-task-definition01

For the second container define your gitweb build.

create-task-definition02

On the next page under App environment, I’ve specified the following values. The container sizes are absolutely optional and can be left blank.

create-task-definition04

Now under Volumes, click Add a volume and create a new volume for each of your 4 EFS directories. The 4 volumes I created look as such:

  • Name: git-efs - Root directory: /git.96-fromsofia.net/git
  • Name: gitweb-custom - Root directory: /git.96-fromsofia.net/gitweb/custom
  • Name: gitweb-log - Root directory: /git.96-fromsofia.net/gitweb/log
  • Name: gitweb-letsencrypt - Root directory: /git.96-fromsofia.net/gitweb/letsencrypt/git.96-fromsofia.net

create-task-definition06

Below the volumes, you should have an option to add mount points. These mount points represent the mount targets within your containers for the volumes you just created.

  • git-efs should be mounted on /srv/git on both containers
  • gitweb-custom should be mounted on the gitweb container under /var/www/gitweb/custom
  • gitweb-log should be mounted on the gitweb container under /var/log/
  • gitweb-letsencrypt should be mounted on the gitweb container under /etc/letsencrypt/live/git.96-fromsofia.net

The final result should look similar to the following:

create-task-definition07
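If you later prefer to manage the task definition as JSON rather than through the console, the volumes and mount points map onto a fragment like the one below - a sketch only, with the file system ID and container name as placeholders, not the complete task definition:

{
  "volumes": [
    {
      "name": "git-efs",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-xxxxxxxx",
        "rootDirectory": "/git.96-fromsofia.net/git"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "git-server",
      "mountPoints": [
        { "sourceVolume": "git-efs", "containerPath": "/srv/git" }
      ]
    }
  ]
}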

After your task has been created you can now create the final piece of infrastructure and this will be your ECS service.

A task on its own is just a definition of what to run. If we want to run it, we need to do so manually; if we want to stop it, again, we need to terminate it manually. This is where services play a crucial role: they maintain a desired number of tasks that are started and kept running. If the underlying EC2 instance fails, your ASG will start a new one and the ECS service will ensure your task is started again.

Go into the cluster you created earlier and click on the Services tab. Select create a service.

In the environment section, select Launch type and set it to EC2.

create-service01

Under Deployment configuration, select Service and choose your task. As the Service Type select Daemon.

create-service02

For the additional details, I’ve kept the defaults as they meet the requirements of my use case.

create-service03
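For completeness, the same daemon service can be created from the CLI (cluster, service and task definition names are placeholders):

aws ecs create-service --cluster <CLUSTER_NAME> --service-name git-server --task-definition <TASK_DEFINITION_FAMILY> --launch-type EC2 --scheduling-strategy DAEMON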

Step 9 - Upload your content to EFS

Now that we have created our service, our task should be running; however, it may have failed, or the gitweb interface will be showing "404 - No projects found". This is because we have only uploaded our static data to the custom EFS folder.

Let’s upload our git repositories:

git clone old.git.96-fromsofia.net:/srv/git/Docker
git clone --bare Docker Docker.git
find Docker.git/ -type d -empty -exec touch {}/.empty \;

The last line is important if you have cloned a bare repository, as S3 will not upload directories with no files in them.

Go through the same process for every other git repository you want to clone. You can then use the aws s3 cp command to upload your repos to the S3 bucket:

aws s3 cp $PWD s3://gitupload.96-fromsofia.net/git/ --recursive

Now we will start the DataSync replication task:

aws datasync list-tasks
aws datasync start-task-execution --task-arn arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>
watch -n 30 aws datasync list-task-executions

Voilà! Go back to the ECS console and see if your task is still running. If not, restart the service and wait for the task to come back online.

Make sure that the A record for git.domain.com is pointing to the elastic IP we provisioned earlier.

If you navigate to your gitweb domain, you should see the gitweb script running, your favicon, git-logo and greeting text all found, and your git repositories listed.

test-site01

Looking good, right? However, you should notice that your browser has an unhappy-looking red padlock next to the address bar, indicating a security issue. This makes sense considering our vhost listens for 443 connections but has no SSL certificate.

Step 10 - Adding an SSL certificate

This next step is optional; however, considering how easy it is to set up SSL certificates with letsencrypt, I issued one for git.96-fromsofia.net and will walk you through the process of acquiring one yourself.

I’ve created a Containerfile you can build and run locally to get the certbot-dns validator started:

FROM ubi9:latest
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
RUN yum install wget certbot -y
RUN wget https://github.com/joohoi/acme-dns-certbot-joohoi/raw/master/acme-dns-auth.py

RUN sed -i 's/python/python3/g' acme-dns-auth.py
RUN chmod +x acme-dns-auth.py
RUN mv acme-dns-auth.py /etc/letsencrypt/

RUN mkdir /letsencrypt
RUN echo -e 'certbot certonly --manual --manual-auth-hook /etc/letsencrypt/acme-dns-auth.py --preferred-challenges dns --debug-challenges --email mailbox@domain.com --agree-tos --no-eff-email -d git.example.com -d www.git.example.com && cp -aL /etc/letsencrypt/live/git.example.com/* /letsencrypt' > /.startup.sh

RUN chmod +x /.startup.sh
CMD ["/bin/bash","/.startup.sh"]

Build the container:

podman build -t certbot:01 .

Make sure there is a letsencrypt directory in your current location and run the container:

podman run -it -v ./letsencrypt/:/letsencrypt:Z certbot:01

Follow the prompts to add the CNAME records to your DNS zone. After successful validation, upload the certificates to S3 and use DataSync to replicate them to EFS:

cd letsencrypt/
aws s3 cp $PWD s3://gitupload.96-fromsofia.net/letsencrypt/ --recursive
aws datasync start-task-execution --task-arn arn:aws:datasync:eu-west-1:<ACCOUNT_ID>:task/task-<TASK_ID>

Now uncomment the SSL lines in your Apache configuration and rebuild the gitweb image with the updated configuration.

Step 11 - Testing it all

Connections to our website should now be secure and the SSL setup successful:

test-site03

Use curl to test the website from our terminal:

curl -ILks https://git.96-fromsofia.net
curl -Iv https://git.96-fromsofia.net 2>&1

Additionally, test the functionality of our git server.

We should be able to clone existing repos, add files to them, make commits and push the new files back. All of that should happen over SSH; however, the bash shell must not be accessible for the git user.

$ git clone git96:/srv/git/Docker.git
Cloning into 'Docker'...
Warning: the ECDSA host key for '[git.96-fromsofia.net]' differs from the key for the IP address '[34.246.205.187]'
Offending key for IP in .ssh/known_hosts:61
Matching host key in .ssh/known_hosts:64
Are you sure you want to continue connecting (yes/no)? yes
remote: Enumerating objects: 31, done.
remote: Counting objects: 100% (31/31), done.
remote: Compressing objects: 100% (27/27), done.
remote: Total 31 (delta 2), reused 31 (delta 2), pack-reused 0
Receiving objects: 100% (31/31), 22.10 KiB | 11.05 MiB/s, done.
Resolving deltas: 100% (2/2), done.
$ cd Docker/
$ mkdir certbot-dns/letsencrypt -p
$ cd certbot-dns/
$ vim Dockerfile
$ cd ../
$ git add certbot-dns/
$ git commit -am 'Add the certbot-dns dockerfile'
[master 2976006] Add the certbot-dns dockerfile
 1 file changed, 17 insertions(+)
 create mode 100644 certbot-dns/Dockerfile
$ git push origin master
Warning: the ECDSA host key for '[git.96-fromsofia.net]' differs from the key for the IP address '[34.246.205.187]'
Offending key for IP in .ssh/known_hosts:61
Matching host key in .ssh/known_hosts:64
Are you sure you want to continue connecting (yes/no)? yes
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 726 bytes | 726.00 KiB/s, done.
Total 4 (delta 1), reused 0 (delta 0)
To git96:/srv/git/Docker.git
   8cd32b0..2976006  master -> master
$ 

Considering you have to specify an SSH key, a custom port and the git user for these operations, you may want to create ~/.ssh/config with the following contents:

Host git-server
    Hostname git.96-fromsofia.net
    Port <CUSTOM_TCP_PORT> 
    User git
    IdentityFile /path/to/private-key 

Then you can interact with your repository as such:

git clone git-server:/path/to/repo.git

Conclusion

Congratulations on making it through this lengthy process of creating and coupling all these different resources together. I hope my explanations were clear enough to follow and the examples given here helpful.

By this point you should have created all the infrastructure outlined in the first 2 paragraphs of this article and your app should have all the functionality described there.

If you’ve enjoyed this, make sure to go ahead and look at the Articles section.

You can find my personal projects on my git server.

If you have a question or want to get in touch, feel free to email me.

Thank you for reading and have a good night!