ACM.46 DRY. Don’t Repeat Yourself.
This is a continuation of my series on Automating Cybersecurity Metrics.
Before I get started on this next blog post, a small request: if you see plagiarism of my posts or anyone else's, please do the author a favor and report it.
I’ve written about DRY or Don’t Repeat Yourself in terms of software programming before. It’s one of my favorite concepts.
I am writing the framework I'm working on as if it were for a large organization with different departments. Each department may have its own code repositories. In addition, the organization may have some code it wants to share throughout the company, kind of like an internal open-source code repository. Perhaps there is an architecture or DevOps team with a set of functions that might be useful across the company.
When I worked at a bank we ran batch jobs and all the batch jobs used a common library so they worked in a similar fashion. That batch job library was in its own repository. Changes to this critical component could cause a lot of things to break so they were handled with care.
Similarly, our applications relied on Java and a corresponding open-source jar file. A project to update the version of Java to fix a vulnerability for the entire set of batch jobs used in the company took about a year because each critical banking application and the central component used by all of them needed to be re-tested to ensure nothing broke and calculations weren’t affected by the upgrade.
Relying on a centralized common set of code can cause issues when it comes to updating that centralized code. On the other hand, our common batch job library was a good design and it helped all the batch jobs run in a consistent manner. We had a scheduling tool that could run all the batch jobs and the operations team understood how to manage them since they pretty much all worked the same way. It was possible to have different batch jobs using different versions of the core library throughout the upgrade process so applications could be upgraded in priority order.
The trade-offs between shared and one-off code can be tricky to evaluate. However, planning out how you will manage upgrades can help overcome any of the downsides of using common code where you can. GitHub makes things a bit easier because a team can use code from a repository as is, use the code from a specific branch, or fork the code and then propose updates back to the primary repo.
When we were having challenges keeping up with developer demand for networking at Capital One, I proposed this idea of allowing people to fork the code and propose changes. I think it was a good idea, but reviewing networking changes as code, rather than as a network design, was too complex to be feasible. We still needed other mechanisms for reviewing the changes.
You will likely want a better way to validate the code than looking at the code itself in many cases. Although code reviews are helpful, it’s too easy to miss something in the details. You may still have someone provide documentation representing their design. Then you could have a QA team deploy the code and validate it matches the design or run automated tools over the code and deployments to perform some kind of validation.
In any case, I like the idea of having an experienced and security-minded team responsible for critical code in a common repository while allowing others to propose changes. When you can abstract out common code, you may be able to reduce the time and effort to get new applications deployed, and reduce security bugs. When less experienced developers propose changes that have security problems to centralized repositories, the more experienced team, or whoever understands the code base better, can explain the problem and everyone can learn from the experience. Win-win!
A repository for shared code
In my past refactoring blog posts, I created some shared functions specific to a particular subfolder, such as the function to deploy an IAM CloudFormation stack. Only the IAM admins should require the IAM capabilities option when deploying a stack, so that function can remain within the IAM subfolder.
However, we can create a function to deploy CloudFormation stacks for just about everything else that does not require that capability. I created a new “Functions” folder at the root and added a file called shared_functions.sh.
In my particular repository I’m mocking up what could become different repositories managed by different parts of the organization — KMS, IAM, and the repositories managed by developers such as Lambda and Jobs. In the future, if these were separate repositories, teams could download the shared code to use in their deployments and update it periodically if it changes. For now, I’m simply going to include the code by referencing it where it exists in the common folder.
Shared functions
At the moment, these are the functions in our shared library. I moved over our function that validates that parameters do not contain an empty string. I also added the ability to exit with an error, so we can terminate the calling code when an error occurs, by adding >&2 after the error message echo and an exit 1 command.
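The validation function might look something like the sketch below. The original code appears as an image in the post, so the function and variable names here are my assumptions; the >&2 and exit 1 behavior is what the post describes.

```shell
#!/bin/bash -e
# Hypothetical reconstruction of the shared validation function.
# Echoing with >&2 sends the message to stderr, and exit 1 terminates
# the calling script with an error status.
validate_param () {
  local function_name="$1"   # the function that received the value
  local param_name="$2"      # the name of the parameter being checked
  local param_value="$3"     # the value that must not be empty

  if [ "$param_value" == "" ]; then
    echo "Error: $function_name: no value for $param_name" >&2
    exit 1
  fi
}
```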

If you were wondering what this line is at the top of all my bash scripts it’s called a shebang.
It tells the operating system what type of code is in the script and what program to use to execute it. I could put python here if I were writing Python code, and then I could execute the file directly instead of typing python in front of the file name. In this case, we are using bash.
The -e at the end tells bash to exit whenever a command fails. Before I added the exit code to the function above, my scripts didn't recognize that there was a problem and would continue to execute. Now that problem is fixed.
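As a quick illustration of the -e behavior (a demonstration I wrote, not code from the post), the script below stops at the first failing command instead of continuing on:

```shell
# Demonstrate how "bash -e" stops a script at the first failing command.
# Write a small throwaway script and execute it directly so the shebang
# (and its -e flag) takes effect.
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/bash -e
echo "before the error"
ls /nonexistent-directory 2>/dev/null
echo "after the error"
EOF
chmod +x "$demo"

# Only the first echo runs; the failed ls terminates the script.
output=$("$demo" || true)
echo "$output"
rm "$demo"
```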

There’s a function to get an export from a stack, since this is code that can become a bit tricky and error-prone.
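Such a lookup might be sketched as follows. The function name and the list-exports query are my guesses at an implementation, not the post's actual code:

```shell
# Hypothetical sketch: look up the value of a CloudFormation export by name.
get_stack_export () {
  local func=${FUNCNAME[0]}
  local export_name="$1"

  if [ "$export_name" == "" ]; then
    echo "Error: $func: export_name is required" >&2
    exit 1
  fi

  # Filter the account's exports down to the one with a matching name
  aws cloudformation list-exports \
    --query "Exports[?Name=='$export_name'].Value" \
    --output text
}
```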

I retrieve outputs to obtain parameters used in some of the KMS CloudFormation templates.
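Reading a stack output could look roughly like this sketch, again assuming the aws CLI; the JMESPath query is my own construction:

```shell
# Hypothetical sketch: read a single output value from a deployed stack.
get_stack_output () {
  local stack_name="$1"
  local output_key="$2"

  # describe-stacks returns all outputs; the query selects one by key
  aws cloudformation describe-stacks \
    --stack-name "$stack_name" \
    --query "Stacks[0].Outputs[?OutputKey=='$output_key'].OutputValue" \
    --output text
}
```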

Finally, we have our common function to deploy a stack that doesn’t require IAM capabilities.
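A sketch of what that common function might look like is below. The string-concatenation style matches what the post describes later, but the exact names and structure are my assumptions:

```shell
# Hypothetical sketch of the shared deploy function (no IAM capabilities).
deploy_stack () {
  local func=${FUNCNAME[0]}   # current function name, for error messages
  local stack_name="$1"
  local template="$2"
  local parameters="$3"       # optional Key=Value pairs

  if [ "$stack_name" == "" ]; then
    echo "Error: $func: stack_name is required" >&2; exit 1
  fi
  if [ "$template" == "" ]; then
    echo "Error: $func: template is required" >&2; exit 1
  fi

  # Build the CLI command as a string, adding parameters only if present
  local cmd="aws cloudformation deploy --stack-name $stack_name --template-file $template"
  if [ "$parameters" != "" ]; then
    cmd="$cmd --parameter-overrides $parameters"
  fi
  eval "$cmd"
}
```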

I added the line above to get the current function name:
func=${FUNCNAME[0]}
That way I can pass it into the validate_param function above. The function can return a nice error message that tells me exactly which argument in exactly which function is receiving an empty string when it should not. I already can think of other ways to improve this code but for the moment we will move on.
Initially I kept the stack deployment function specific to IAM in the IAM subfolder, but later I moved it into the common functions because I needed to use it across two repositories. I altered the common function so I could run an IAM stack from it. I also realized that I don’t always need to pass in parameters, so I had to handle that scenario as well.
One other thing I ran into was passing parameters with spaces into CloudFormation stacks. I had to alter the parameter handling in two different ways. The first method worked when I wrote the AWS CLI command out normally. I had to revise it again when forming the CLI command as a concatenated string, as I’m doing below. I go into detail in this post on the changes I had to make.
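To illustrate the quoting issue (a reconstruction of the problem, not the post's exact code): when the command is built up as a string and run with eval, the shell parses it a second time, so quotes around a value with spaces have to be embedded in the string itself:

```shell
# When typing the command directly, ordinary quoting is enough:
#   aws cloudformation deploy ... --parameter-overrides "JobName=my batch job"
#
# When concatenating the command into a string and running it with eval,
# escaped quotes must be embedded around the value so the spaces survive
# the second round of shell parsing.
deploy_with_spaces () {
  local stack="$1" template="$2" key="$3" value="$4"
  local cmd="aws cloudformation deploy --stack-name $stack --template-file $template"
  cmd="$cmd --parameter-overrides $key=\"$value\""
  eval "$cmd"
}
```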
As I mention in the above post, I also can’t fully assess the security ramifications without doing some fuzzing and pentesting on all of this. But I’m hoping to move to a better programming language with a better way to validate parameters and types in the future, so I’m leaving it this way for now.

Since I already had a lot of things calling my function to deploy an IAM stack, I kept an IAM-specific wrapper, which I can remove later if I want to update all the other code. However, it’s also kind of nice to have a function specifically for IAM that takes fewer parameters. This approach is similar to the decorator pattern in object-oriented programming. I don’t really need to set all the parameters below; that code does nothing functionally, but I did it for readability.
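The wrapper idea can be sketched like this, with a minimal stand-in for the altered common function (all names here are my assumptions, not the post's code):

```shell
# Minimal stand-in for the altered common function, which now accepts an
# optional capabilities argument.
deploy_stack () {
  local stack_name="$1" template="$2" parameters="$3" capabilities="$4"
  local cmd="aws cloudformation deploy --stack-name $stack_name --template-file $template"
  if [ "$parameters" != "" ]; then cmd="$cmd --parameter-overrides $parameters"; fi
  if [ "$capabilities" != "" ]; then cmd="$cmd --capabilities $capabilities"; fi
  eval "$cmd"
}

# IAM-specific wrapper: callers pass fewer arguments, and the wrapper
# fills in the IAM capability, much like a decorator wrapping a function.
deploy_iam_stack () {
  deploy_stack "$1" "$2" "$3" "CAPABILITY_NAMED_IAM"
}
```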

Teri Radichel
If you liked this story please clap and follow:
Medium: Teri Radichel or Email List: Teri Radichel
Twitter: @teriradichel or @2ndSightLab
Request services via LinkedIn: Teri Radichel or IANS Research
© 2nd Sight Lab 2022
All the posts in this series:
____________________________________________
Author:
Cybersecurity for Executives in the Age of Cloud on Amazon

Need Cloud Security Training? 2nd Sight Lab Cloud Security Training
Is your cloud secure? Hire 2nd Sight Lab for a penetration test or security assessment.
Have a Cybersecurity or Cloud Security Question? Ask Teri Radichel by scheduling a call with IANS Research.
Cybersecurity & Cloud Security Resources by Teri Radichel: Cybersecurity and Cloud security classes, articles, white papers, presentations, and podcasts