Traditional DevOps Flow

In a typical DevOps pipeline, security enters the picture quite late, right before deployment. Security testing at this point has a major drawback: alongside the small security bugs that show up, serious security vulnerabilities are often identified. Since the application is almost ready by then, changing code at this stage can get tricky and time consuming, because developers must make sure that fixing a bug does not break other parts of the code that depend on the component they are changing.

Shifting Left

Here’s where the concept of shifting left saves the day (probably saves a lot of days). By shifting security testing to the left of the process, testing is done at each phase instead of all at once just before deploying the application. As a result, developers learn much earlier whether any changes have to be made to the code, and don’t have to worry as much about a fix affecting the rest of the application.


In a world where so many of us depend on the open-source community, the security of the code and packages we use deserves attention too. Given the size and diversity of the open-source ecosystem and the amount of open-source code and packages available at every developer’s fingertips, it is not practical to manually check each package we use for security issues.


  1. Install the Snyk utility using npm install -g snyk.
  2. Once installed, authenticate with your Snyk account: snyk auth

For more detail on how to authenticate, take a look at the CLI authentication section of the Snyk documentation.

Snyk CLI:

The first step to using Snyk in the CLI is to authenticate.

snyk auth

This will open the browser and take you to the Snyk login page, where you can log in and authenticate the CLI session.

In addition, we can also authenticate Snyk using an API token.

snyk config set api=<api token>

This command authenticates directly with the provided API token. We can view the API token by:

  1. Clicking our name in the top right corner
  2. Selecting General Settings
  3. Clicking “Click to show” to reveal the API token

Snyk test

This will test the dependencies in a project for vulnerabilities. It is particularly useful when integrating with CI/CD pipelines, as it returns a non-zero exit code, causing the build to fail if vulnerabilities are detected. The exit statuses and their meanings are given below:

0: success, no vulns found

1: action_needed, vulns found

2: failure, try to re-run command

3: failure, no supported projects detected

  • Navigate to the project’s directory and run the command

snyk test

The vulnerabilities found will be categorized on the basis of the availability of a fix or a patch.
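In a CI script, these exit statuses can also be branched on explicitly. Here is a minimal sketch; the describe_snyk_exit helper is our own illustration, not part of the Snyk CLI:

```shell
#!/bin/sh
# Map Snyk CLI exit statuses to human-readable outcomes.
# (describe_snyk_exit is an illustrative helper, not a Snyk command.)
describe_snyk_exit() {
  case "$1" in
    0) echo "success: no vulnerabilities found" ;;
    1) echo "action needed: vulnerabilities found" ;;
    2) echo "failure: try re-running the command" ;;
    3) echo "failure: no supported projects detected" ;;
    *) echo "unknown exit status: $1" ;;
  esac
}

# A pipeline step might run:
#   snyk test; status=$?; describe_snyk_exit "$status"
describe_snyk_exit 1
```

Because any non-zero status fails a pipeline step, a plain snyk test command is usually enough to break the build on its own; the helper above is only useful for logging a friendlier message.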

Snyk monitor

This creates a new project (a snapshot of the project) on the Snyk website and continuously monitors it for new vulnerabilities.

  • Navigate to the project’s directory and run the command

snyk monitor

This will generate a URL that takes us to the web dashboard.

Snyk wizard

We’ll use the “wizard” to fix the vulnerabilities that were found. We can choose to upgrade, patch (if available) or skip an issue.

  • The wizard can be started by running the command. 

snyk wizard

Integrating snyk with CI/CD pipeline

Let’s start with a scenario where developers are publishing a Docker image of an application. Their needs can be put into two parts:

  1. They want their latest code to be in the image that is published.
  2. They want to prevent the security issues that might come with the use of open-source packages and the base images used.

These can be solved by integrating Snyk with the CI pipeline so that every time code is pushed to the GitHub repository, Snyk tests for vulnerabilities in the packages and the base image used during the build process. If any vulnerabilities are found, the build fails and the built image won’t be pushed.


We’ll be using AWS CodeBuild. CodeBuild saves us the work of running and managing our own build server: it fully manages the build as it compiles the code, runs tests and produces deployment-ready software packages.

Apart from the application source code and the Dockerfile, one more file called “buildspec.yml” is required. It is this file that gives CodeBuild the instructions to compile, run tests and build the application.

Understanding the buildspec

Let’s take a look at our buildspec.yml file: 

version: 0.2
env:
  secrets-manager:
    SNYK: snyk:snykapi
phases:
  install:
    runtime-versions:
      docker: 19
  pre_build:
    commands:
      # Logging in to Amazon ECR
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=<URI of ECR Repository>
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      # Install NPM dependencies
      - echo Installing NPM dependencies
      - npm install
      # Install Snyk
      - echo Install Snyk
      - curl -Lo ./snyk ""
      - chmod -R +x ./snyk
      # Snyk auth
      - ./snyk config set api="$SNYK"
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build --build-arg snyk_auth_token=$SNYK -t $REPOSITORY_URI:latest .
      - echo Running Snyk
      - ./snyk test --docker $REPOSITORY_URI:latest --file=Dockerfile
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
buildspec file
  • As we know, Snyk needs to be authenticated. One option is to hardcode the API key, but that should never be done. Instead, the API key is stored in AWS Secrets Manager; CodeBuild retrieves it and exposes it as an environment variable. The syntax is as follows:

Variable: secret-name:secret-key
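Concretely, with a secret named snyk holding a key snykapi (the names used in our buildspec), the env section looks like this:

```yaml
env:
  secrets-manager:
    # <environment variable>: <secret-name>:<secret-key>
    SNYK: snyk:snykapi
```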

  • The Pre_Build stage does three things:
    1. Log in to AWS ECR and set the values of the variables.
    2. Install the NPM dependencies that are needed.
    3. Download and authenticate Snyk.
  • In the Build stage:
    1. We build the image, passing the Snyk API key as a build argument. This will be explained when we explore the Dockerfile.
    2. Next, we run the Snyk test on the Dockerfile and the image we have built.
    3. Finally, we tag the image with the image tag (the commit hash).
  • Next, in the Post_Build stage:
    1. The problem with CodeBuild is that even if one stage fails (in our case, if the Build stage fails because Snyk found vulnerabilities), the next stage, i.e. the Post_Build stage, still gets executed. So we check the CODEBUILD_BUILD_SUCCEEDING variable, which is 1 while all stages have succeeded and becomes 0 once any stage fails. This way the Post_Build stage exits before pushing if the Build stage failed.
    2. If the Build stage is successful, we push the image to Amazon ECR.
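The Post_Build guard can be sketched in isolation. The guard_post_build name below is ours, chosen for illustration; CodeBuild itself only provides the CODEBUILD_BUILD_SUCCEEDING variable:

```shell
#!/bin/sh
# CODEBUILD_BUILD_SUCCEEDING is set by CodeBuild: 1 while all stages
# have succeeded so far, 0 once any stage has failed.
guard_post_build() {
  if [ "$1" = "0" ]; then
    echo "previous stage failed: skipping push"
    return 1
  fi
  echo "previous stage succeeded: pushing image"
  return 0
}

# Equivalent to the buildspec one-liner:
#   bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
guard_post_build "${CODEBUILD_BUILD_SUCCEEDING:-1}"
```

Exiting non-zero at this point is what stops the docker push commands that follow it in the post_build stage from running.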

This file should be in the GitHub repository along with the other files.

Understanding the Dockerfile

Let’s now look at the Dockerfile:

FROM node:4.6
ARG snyk_auth_token
ENV SNYK_TOKEN=${snyk_auth_token}
COPY package*.json ./
RUN npm install
COPY . .
RUN curl -Lo ./snyk ""
RUN chmod -R +x ./snyk
RUN ./snyk test
CMD ["node", "index.js"]

Any vulnerabilities in the base image are caught by the Snyk test in the buildspec.yml file, but we still need to test for security issues in the dependencies used by the application. So we include commands to download and run a Snyk test in the Dockerfile itself: once the npm dependencies are installed, Snyk is downloaded and run to check them for security issues.

Now with the base set, let’s get our hands dirty with CodeBuild.

Building with CodeBuild

  • After logging in to AWS, click on the “Services” dropdown in the top left and select “CodeBuild”.
  • If it’s your first time using CodeBuild, click on “Create project”.
  • Otherwise, click on “Create build project”.
  • In Project configuration, enter the project name and description.
  • Choose GitHub as the source and connect to GitHub to use our GitHub repository as the source. We can connect using either OAuth or a GitHub personal access token. Once connected, we can choose the repository and the branch.
  • Enabling webhooks allows us to start the build based on a GitHub event. Selecting “Push” as the “Event type” will trigger the build every time code is pushed to the GitHub repository.
  • For the environment, we’ll use an Ubuntu image managed by CodeBuild.
  • In the Build specification, select “Choose a buildspec file” and leave “Buildspec name” blank. If you have renamed the buildspec file, type in that name instead.
  • As we’ll be doing a single build, we can leave the “Batch configuration” checkbox unchecked.
  • We won’t be using any artifacts, so we can set the “Type” to “No artifacts”.
  • Though optional, we’ll enable CloudWatch logs as they come in very handy when debugging builds.
  • Finally, click on “Create build project”.

Let’s try it out and see if it works.

Without any vulnerabilities:

  • Pushing a change to GitHub. The build succeeded and the image was pushed:
  • The image below is part of the CloudWatch log:

With Vulnerabilities:

  • Pushing a change to GitHub. The build failed and the image was not pushed to ECR.

Thank you for reading! – Vishal Pranav and Setu Parimi

Sign up for the blog directly here.

Check out our professional services here.

Feedback is welcome! For professional services, fan mail, hate mail, or whatever else, contact [email protected]

