Anatomy of a Cloud Infrastructure Attack via a Pull Request (goteleport.com)
80 points by twakefield on Sept 18, 2021 | hide | past | favorite | 14 comments


Shameless plug for something I've been working on: https://github.com/ovotech/gitoops/

I wrote GitOops to map attack paths through GitHub and CI/CD systems, at scale.

As an ex-pentester: at most companies I worked with, all you needed to do was open a PR against the right repositories to take over sensitive production environments. I suspect that at most companies, an attacker compromising a single employee/intern with GitHub/GitLab access is enough to cause a disaster scenario.


How can they be avoided without stopping the use of CI/CD? I assume it is the same with GitLab?


> How can they be avoided without stopping the use of CI/CD?

Use separate systems for CI and CD, and don't put sensitive "keys to the kingdom" credentials in CI. For example:

Put CI in GitHub Actions or GitLab CI without any credentials to write artifacts or knowledge of stage/prod deployments. Let the "interns" in the threat model use this.

Put production CD/release in Jenkins or a similar self-hosted system that isn't publicly accessible. Limit the folks who can trigger jobs in this system to a small group of trusted employees, and don't trigger runs from actions that don't require U2F auth (e.g. require a manual click through a web UI protected by SSO, or only deploy from specific branches protected to allow only approved PRs -- no git client pushes).

> I assume it is the same with GitLab?

Yes. While GitLab does offer some secret and variable masking controls, the Travis disclosure earlier this week, where all secrets were exposed to pull request CI, shows you probably don't want to bet your business on those controls. (Acknowledging GitLab != Travis.)

See https://travis-ci.community/t/security-bulletin/12081


Use protected branches (only inject prod secrets on master, which can't be pushed to) and have test secrets for other branches. Now your weak spot is only anyone who can hit merge on a PR to master, which is easy to control.
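A minimal sketch of that gating logic (all names hypothetical), assuming the CI server decides which secret set to inject based on whether the job's branch is protected:

```python
# Hypothetical sketch of branch-gated secret injection: production secrets
# are handed to a job only when it runs on a protected branch.
PROD_SECRETS = {"DEPLOY_KEY": "prod-key"}   # only reachable from master
TEST_SECRETS = {"DEPLOY_KEY": "test-key"}   # everything else, incl. fork PRs

def secrets_for_job(branch: str, protected: frozenset = frozenset({"master"})) -> dict:
    """Return the secret set a CI job on `branch` should receive."""
    return PROD_SECRETS if branch in protected else TEST_SECRETS
```

With this split, a malicious PR can at worst leak the test credentials; the prod secrets never enter a job an untrusted contributor can influence.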


On GitLab, there has been another way for some time: a JWT is available in the CI_JOB_JWT env var, with the branch name and other info among its claims [1]. One can then use this token to obtain production secrets based on whether the branch is trusted.

GitHub has the same feature on its roadmap [2], which also lets you obtain AWS or GCP credentials directly, restricted by branch name [3, 4].

[1] https://docs.gitlab.com/ee/ci/examples/authenticating-with-h...

[2] https://github.com/github/roadmap/issues/249

[3] https://awsteele.com/blog/2021/09/15/aws-federation-comes-to...

[4] https://github.com/sethvargo/oidc-auth-google-cloud
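To illustrate what the secrets backend sees, here is a stdlib-only sketch that decodes a JWT payload and reads a branch claim. The claim names mimic GitLab's CI_JOB_JWT but the token itself is a fabricated demo; a real consumer must verify the signature against the issuer's public keys (e.g. via JWKS) before trusting any claim.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (UNVERIFIED) payload segment of a JWT to inspect its claims.
    Demo only -- production code must verify the signature first."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricate a token whose payload resembles GitLab's CI_JOB_JWT claims.
demo_payload = base64.urlsafe_b64encode(json.dumps(
    {"ref": "master", "ref_protected": "true", "project_path": "group/app"}
).encode()).rstrip(b"=").decode()
token = f"fake-header.{demo_payload}.fake-signature"

claims = jwt_claims(token)
# A secrets backend would release prod secrets only if, say,
# claims["ref"] == "master" and claims["ref_protected"] == "true".
```

The point is that the branch name travels inside a signed token rather than a spoofable env var, so the secrets backend can make the trust decision itself.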


That is awesome, thank you for telling me!


Attacks here are incredibly common. Fortunately they're usually unsophisticated and are just plain crypto mining to steal CPU cycles.

Worst case is if a CI system has permissions to deploy to production, which is really common too.

Another common one to watch out for is permissions to publish artifacts. It's very common for a CI system to build and test something like a container image, then for another system to promote that image to production. Even when the CI system can't touch production directly, it can still be used to pivot to more sensitive targets.
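One way to blunt that pivot (a sketch with hypothetical names, not any particular product's API) is for the promotion step to trust only content digests recorded by the trusted build system, rather than whatever tag CI last pushed:

```python
import hashlib

# Digests recorded by the *trusted* builder, e.g. from a signed provenance
# record. The public-facing CI has no write access to this set.
TRUSTED_DIGESTS: set = set()

def record_trusted_build(artifact: bytes) -> str:
    """Called by the trusted build/release system after producing an artifact."""
    digest = hashlib.sha256(artifact).hexdigest()
    TRUSTED_DIGESTS.add(digest)
    return digest

def promote_to_production(artifact: bytes) -> bool:
    """Refuse promotion unless the artifact's digest matches a trusted build."""
    return hashlib.sha256(artifact).hexdigest() in TRUSTED_DIGESTS
```

If CI can overwrite a mutable tag but promotion pins immutable digests, a compromised CI job can't quietly swap in a tampered image.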

Great find and write-up from the teleport team.


For every company I've worked at, the CI system basically had admin access over our infrastructure. It has to in order to do infrastructure as code.

As the article states, accepting public pull requests and letting them run on your internal CI is a big mistake.

Public CIs are fine though: ones that literally only run code builds, tests, etc.


> the CI system basically had admin access over our infrastructure. It has to in order to do infrastructure as code.

> Public CIs are fine though. Ones that literally only do code builds, tests etc

I couldn't agree more.

Even internally, the security and authorization needs of deployment/release are wildly higher than those for running an ephemeral build and test. "CI/CD" needs to be un-bundled, for the sake of security, such that CI doesn't have admin access over infrastructure. Only a much more limited CD has this access.

In the case of open core products that use public facing CI, I'm inclined to put the average employee's CI on the public system; for transparency, but also to make sure external contributors don't become second class citizens using an irregular workflow/toolset. Maintain a separate internal release system limited to trusted employees. Principle of least privilege, and all that. :)


You don't have to attack cloud infra with actual code. You can bribe employees, either of the target company or of a vendor that builds systems for them or has access. Or simply scam one employee and use your RAT to infiltrate later. Scammers have gone professional, and thanks to the many ridiculous policies of companies, their messages are indistinguishable from real emails/calls. It's easier and more effective, and it gets swept under the rug: closed source gets no public scrutiny, and companies don't like revealing every time an employee falls for a scam.


That's a really interesting read. I bet the DIND (Docker-in-Docker) pattern is very common because it is a) common to run CI jobs in containers, b) common (and a good idea) to describe test environments as containers inside the source code repo, and c) a good idea to use the same source for a) and b).

One particular instance is GitLab, where the declarative pipeline demands a Docker image. If your repository ships a Docker description for test execution, you are pretty much forced to run DIND.


Hi! I wrote this. I’m happy to answer any questions.


Thank you for the detailed writeup. This is a topic which I think is not discussed much.

> We will split public-facing CI from release infrastructure and internal CI infrastructure. (teleport#8268)

Did you also consider some form of out-of-band approval mechanism for production environment access (via a chatbot, push notification, etc.)? I think something like that might work technically, but scalability might be a challenge. It might still be easier to manage than a complete self-managed second CI system. I've been pondering it for some time as a way to use GitLab CD without handing GitLab all the keys to the kingdom.


> Did you also consider some form of out-of-band approval mechanism for production environment access?

No, not before your comment at least. Vendor CI tools (be they GitLab, Drone, etc.) often make it difficult to use this workflow. Their typical model is long-lived static creds, with authn/authz gated around job kickoff. I'm not aware of any that would work with delegated/approved credentials, at least without writing a custom secrets plugin. If anyone knows of such capabilities, give me a holler.

Furthermore, there is still the risk of any service available to external contributors being compromised (as we saw in this vulnerability). I'd just as soon have "no prod secrets touch a system that does external CI" as a security invariant -- no matter how trustworthy that external CI system is.

In a bittersweet irony, out-of-band approvals are in our product:

https://goteleport.com/blog/workflow-api/

but we're not there with CI yet. :/ It would be fantastic if we could have short lived credentials issued only for the duration of the job, after approval (or better: after delegation) from a trusted party. Something like AWS's `CalledVia`.



