ECR and TryHackMe! Advent of Cyber – Day 19!

Today’s exercise is based on AWS Elastic Container Registry (ECR). ECR is AWS’s fully managed, cloud-hosted container image registry service. And if you’re wondering what containers, Docker, Docker images, and the rest actually are, you’re not alone.

Let’s understand all these fuzzy terms one by one:

Containers: Like virtual machines, containers are a form of virtualization, but the virtualization happens at the operating-system level. Each container bundles an application together with its software, libraries, configuration, and other dependency files, which has made developers’ lives easier in the deployment phase of the software development lifecycle.

Deploying software from one computing environment to another used to be a constant challenge. Developers frequently move software from dev to test to prod for testing and deployment, and to make the software run across all these environments, they constantly had to manage the IT infrastructure of each one. Incompatible software versions, libraries, and configurations often caused problems during deployment, and software would stop working the way it did in the previous environment. Containerization was introduced to solve this problem of incompatible dependencies: it uses an instance of the operating system kernel directly and binds all software dependencies and libraries into an isolated process called a container. So a container holds not just the application, but everything needed to run it.

Docker: It is a platform-as-a-service offering that provides OS-level virtualization to deliver software in packages called containers. Docker is by far the best-known containerization technology and the leading product in this category. When someone talks about Docker, they are often talking about the multiple technologies that work together to make containerization possible.

Docker API: a local communication interface on a configured Linux machine, with standardized commands, used to communicate with the Docker daemon.

Docker Daemon: a background process that runs on your machine and manages container components such as images, data volumes, and other container artifacts.

Docker Container Image Format: a .tar file, structured as per the OCI Image Specification.

Docker Image: It is an immutable (unchangeable) file that contains the source code, libraries, dependencies, tools, and other files needed for an application to run. Since images are read-only files, they are sometimes referred to as snapshots: they represent an application and its virtual environment at a specific point in time. This consistency is one of the great features of Docker, as it allows developers to test and experiment with software in stable, uniform conditions. When a Docker image runs, it creates a run-time environment for your application called a Docker container, which binds together all dependencies as an isolated process to provide a standardized computing environment.

Since images are like templates, you cannot start or run them directly; instead, you use the template as a base to build a container. A container is a running image. Once you create a container, Docker adds a writable layer on top of the immutable image, meaning you can now modify it. The image base from which you create a container exists separately and cannot be altered. When you run a containerized environment, you essentially create a read-write copy of that filesystem (the Docker image) inside the container. This added container layer allows modifications of the entire copy of the image.

Dockerfile: It is a script of instructions that defines how to build a specific Docker image. When you run a build, Docker executes the outlined commands and creates the image.
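As an illustration, a minimal Dockerfile might look like the following (the base image, file names, and commands here are invented for this example, not taken from the challenge):

```dockerfile
# Start from a small base image
FROM alpine:3.15

# Copy the application script into the image
COPY app.sh /app/app.sh

# Default command executed when a container starts from this image
CMD ["/bin/sh", "/app/app.sh"]
```

Each instruction adds a layer to the final image, which is part of why images are described as immutable snapshots.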

Docker Build: It is the command for creating an image from a Dockerfile. Relatedly, to create a container layer from an image, use the command docker create.
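To sketch how these commands fit together (the image and container names below are hypothetical, and the commands assume a running Docker daemon):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it "myapp:latest"
docker build -t myapp:latest .

# Create a container (a writable layer on top of the image)
# without starting it
docker create --name myapp-container myapp:latest

# Start the container created above
docker start myapp-container
```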

Now that we are familiar with Docker terminology, let’s understand what AWS ECR is.

Amazon Elastic Container Registry (Amazon ECR) is an AWS-managed container image registry service that supports private repositories with resource-based permissions using AWS IAM, so that only specified users or Amazon EC2 instances can access your container repositories and images. You can use your preferred CLI to push, pull, and manage Docker images, Open Container Initiative (OCI) images, and OCI-compatible artifacts.
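As a sketch of how the AWS CLI and Docker interact with a public ECR registry (the registry alias and repository path below are placeholders, not the challenge’s, and public repositories can also be pulled anonymously):

```shell
# Authenticate Docker to public ECR; ecr-public authentication
# always goes through the us-east-1 region
aws ecr-public get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin public.ecr.aws

# Pull an image from a public ECR repository (placeholder path)
docker pull public.ecr.aws/example-alias/example-repo:latest
```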

Day 19 Challenge and my learnings:

Install the AWS CLI and check whether any default container images are stored on the system.

I followed these instructions to install the CLI on the Linux-based AttackBox, then used the docker images command to see the container images that are stored by default on the AttackBox:

Docker images are stored in “repositories”, which are references to file mappings the Docker daemon knows how to reach, including the container .tar files. Each image in a repository has an image tag, and images can be referenced using either their tag or their image ID.

For example:

remnux/ciphy:latest or ec11b47184f6

Pull the docker image:

Since Grinch Enterprises’ attack infrastructure was likely a publicly accessible Elastic Container Registry, we used the following command to pull their Docker image:

docker pull public.ecr.aws/h0w1j9u3/grinch-aoc:latest

Run the container and interact with it using a shell:

docker run -it public.ecr.aws/h0w1j9u3/grinch-aoc:latest

which will open a shell inside the container, as indicated by the $ prompt. Once inside the container, we can do a little reconnaissance: ls -la

Check the environment variable configuration:

printenv
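Since printenv prints every variable as a NAME=value line, a quick grep narrows the output down to names that often hold credentials or flags. This is a general sketch; the variable and value below are invented for illustration, not taken from the challenge image:

```shell
# Simulate a credential baked into a container's environment
# (this variable name and value are made up for this example)
export API_KEY="example-secret-value"

# List all environment variables, filtering for names that
# commonly hold credentials or challenge flags
printenv | grep -iE 'key|secret|token|flag'
```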

Save the container image as a .tar file:

docker save -o aoc.tar public.ecr.aws/h0w1j9u3/grinch-aoc:latest

Extract the .tar file:

tar -xvf aoc.tar (-x extracts, -v is for verbose output, -f specifies the archive file)
Use the jq tool to make the output readable: cat manifest.json | jq
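To show what jq does with a manifest, here is a self-contained sketch using a fabricated manifest.json whose field values are purely illustrative (the real file extracted from aoc.tar will differ):

```shell
# Recreate a minimal manifest.json like the one inside a saved image tar
cat > manifest.json <<'EOF'
[{"Config":"abc123.json","RepoTags":["example/image:latest"],"Layers":["layer1/layer.tar"]}]
EOF

# Pretty-print the whole manifest
jq . manifest.json

# Extract a single field, e.g. the image tag (-r strips the quotes)
jq -r '.[0].RepoTags[0]' manifest.json
```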

As a complete beginner with containers, I found this lab gave me a practical understanding of containers, why we need them, and how they overcame the challenges of the software development lifecycle, along with knowledge of the basic commands to interact with containers, such as pull and save.

Finding the CTF flag for this lab took me approximately two hours, though I had to use Google and other internet resources to understand the terminology. Overall, it was a good lab, with an instructional video in case you get stuck anywhere.

Lab solution