Content editors should not be able to add arbitrary code to a bank's website unless it undergoes review from someone who understands web security. If there is some kind of content editing tool, it should only allow content (not arbitrary scripts) to be edited.
Until about a year ago I was working as an FE developer for a major international bank.
All the processes and knowledge were in place to make sure all security considerations were taken with our software. But... all that good work and intention goes out the window when the marketing and analytics teams can pretty much dump any old JS onto a production page via GTM, on a whim. During my 18 months there, numerous issues (thankfully not security issues - at least that we know of) were introduced via this method, including a full outage of the customer onboarding journey.
I see GTM being used (abused?) by marketing teams regularly, but I'm really surprised that a bank with its own development team would allow it.
It is really powerful and sometimes incredibly useful (e.g. I once built a schema.org metadata system that scraped pages on the fly for a site with a broken CMS). Simo Ahava does clever things with it.
But from what I can tell, it seems to be a way of avoiding communication between teams, or a political power grab inside bigger companies - a parallel CMS. And the silly bit is that it's normally not doing much more than could be achieved by copying and pasting a few lines of code into a template.
I was once investigating "partner reporting that our embed loads slowly". The investigation result was something like: their HTML injects JS, which injects another JS, which injects GTM, which injects an SDK, which injects the embed.
Of course it all loads only when the user does not have any adblocking or tracking protections enabled.
It's a Google backdoor for your team to add more tracking etc.
The important point is that it's a backdoor for marketing (and adtech) teams to get around developer/security requirements. At some point, someone on those teams gets frustrated that their one-line code requests (just load this script! add a gif banner here!) keep falling behind in the backlog. That happens in part because the product team often doesn't care about marketing, and sometimes because developers know that "just one more script!" paves the road to hell. At some point the third party that's trying to get their business going through your business convinces the marketing team to add GTM, and the marketing team says to the dev team "Hey we need GTM to implement THIS script". This time, because the other side has promised them $$$ in terms of ROI, the marketing team pushes really hard for it, and eventually a product manager approves the request to get them off their back. The rest, as they say, is history (at retro time, multiple times down the road).
Well the clue's in the name. But I'd argue that Google analysing metadata about who's loading what/when through GTM is a lesser evil, when compared to normalising everyone sticking megabytes of mystery scripts on their sites with the tool.
I'd say 'frontdoor' given that the standard first tag to implement is Google Analytics. But I am sure they also generate some data for their own use about the number and types of tags that each site is adding via GTM.
You can tell by the URL path (it's under /content/dam) that it's served by Adobe Experience Manager (Adobe's CMS, dam stands for digital asset manager, where you store static assets like images and js). The script itself is "target.js", which is Adobe Target - their A/B framework - which "supports custom code insertion"[1] similar to a tag manager.
It's not GTM, but this is like loading the GTM script itself from archive.org.
It's worth noting that AEM is often set up very badly, following requirements from managers who have no idea of or concern for web development, and later maintained by low-cost content editors who barely know some HTML. Moreover, this CMS seems to be a standard for big sites even though the licenses are costly, development is slow and complicated, and it adds a lot of human hours to site maintenance.
> All the processes and knowledge were in place to make sure all security considerations were taken with our software. But... all that good work and intention goes out the window when the marketing and analytics teams can pretty much dump any old JS onto a production page via GTM, on a whim.
That's what's great about content security policies: put a CSP on the page, and when people try to add scripts without going through proper processes it just won't run.
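As a sketch (the domain names here are made up for illustration), a policy like this tells the browser to refuse any script not served from hosts you control, so a tag injected from an unlisted third-party domain simply won't execute:

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://static.example-bank.com; object-src 'none'; base-uri 'self'
```

If you do want GTM itself to load, you'd have to list https://www.googletagmanager.com in script-src explicitly, and anything GTM then tries to inject from yet another domain is still blocked by the same policy.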
Yes, but it won't allow GTM to load scripts off OTHER domains. It basically re-adds the requirement of engineering gating off 300 different adtech trackers.
I work for a large media company and this is exactly why we don't give editors access to GTM (or even all developers), nor do we allow A/B testing tools such as Optimizely, and we've moved away from letting them paste HTML into articles to include custom elements.
Unfortunately, we still have lots of 3rd-party stuff in GTM, and technically any ad can run random js on our sites. That's where most problems come from these days.
Editors also still technically have the power to brick a given site through our old in-house CMS, which has no proper access control levels - hopefully not for much longer...
> technically any ad can run random js on our sites
If you configure your ad network to run all ads cross-domain, then they are very limited. For example, in GPT (which I work on) you call "googletag.pubads().setForceSafeFrame(true)".
"customer onboarding journey" sounds altogether twee for a major international bank. Banks are mattresses with insurance policies. Why is there even a journey to be broken?
Because they have to know who you are in a fair bit of detail, both to comply with the law and to know who they should let take money out of the account. Because they need you to agree to a bunch of contracts. Because they need to get you things like a bank card. Because they need to decide if they want to lend you money (most often in the form of a credit card). And so on and so forth.
Fun story: I had a Barclaycard once. I chose to get bills/invoices via snail mail because I'm old fashioned. I didn't use the card for almost a year after signing up, then finally bought something online with it. Got an invoice the next month and paid it. Then got another invoice the month after, for 60ct, for the invoice sent via snail mail. Fair enough, they stated that when I signed up. But then I got another bill the following month because, you guessed it, the invoice for the invoice was also sent via mail. I tried to be clever and paid 1.20€ up front, but that doesn't work either: I just got a letter telling me there was still enough money on my account to cover the invoice. So after four or five months I gave up and cancelled my account.
Adding scripts might be considered OK (content editors could be given light-weight developer tasks), but modifying the site's CSP definitely isn't. There aren't many things you can do to stop an attacker injecting code into your site, but having a CSP that whitelists servers under your control and blocks everything else is something you can implement. Changes to the CSP should not be made lightly.
The fact that the CSP is whitelisting archive.org makes this look like an attack to me, or at least a test run before a real attack. I don't believe this was a simple mistake.
I've audited e-banking websites before. Every file, every element needs to be accounted for. This is beyond "an honest mistake" and it should have been caught by any of the (many) scans the bank orders for its websites. And the scan/report should have caught that. Who does their scans/reports? What is the scope of these scans? Who reviews these reports?
This is not about content. Content is "make a new page with that template and add a photo and the new text about XYZ product". Not a new functionality/code.
I wonder who signed this off prior to release and what their wiki/Jira mentions.
Edit: e-banking and other banks' websites/online presence(s).
Edit2/rant: I have been the go-to audit/sec/compliance guy for more than a decade. It amazes me that in this forum there are very few discussions/POVs on the audit/security element. In most cases (like this one), an experienced auditor would have picked this up in 20 minutes. If only Barclays (in this case) had bothered to escape the fixed, 10-year-old checklist/audit program, this would never have happened.
> This is beyond "an honest mistake" and it should have been caught by any of the (many) scans the bank orders for its websites. And the scan/report should have caught that. Who does their scans/reports? What is the scope of these scans? Who reviews these reports?
It is partially baffling, at least. We once had to build a standalone website for an ad campaign for a bank (same level/recognition as Barclays), some kind of rewards program where all you could do was browse what you could redeem your points for. No accounts, no cart, no integration whatsoever - it was basically a static site (with some user interactivity). We'd get scanned constantly and were always made aware of mistakes or issues, one of them being 3rd-party resources.
On the other hand, I wasn't too impressed with their security scanning. Two issues came up that made me dubious about how much good it actually did.
First was some kind of SSH vulnerability. We were using an older (but still supported) Amazon Linux AMI. The SSH was actually patched, but the version string didn't match what the security tool expected. We had to get them to talk with AWS to confirm it was an actual patched version of SSH and that the scanning tool just wasn't accepting it as valid.
The other was some kind of JavaScript vulnerability in a library. There was an open CVE on the library and an open issue on GitHub with comments on how to fix it, but the library itself wasn't being updated. I manually patched it and named it version x.y.z-patched or something. The report comes back that it's still a vulnerability. I'm like, what? Impossible, I just patched it and tested it myself; the CVE no longer works. So I just renamed it to x-patched.y.z and poof, we pass validation.
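A guess at why the rename worked: if the scanner just pulls the first dotted-number run out of the filename and compares it against a list of known-bad versions, then "x.y.z-patched" still parses as "x.y.z", while "x-patched.y.z" doesn't. A hypothetical sketch (the regex, the version list, and the filenames are all made up for illustration, not how any particular scanner works):

```python
import re

# Known-vulnerable versions a naive scanner might compare against
# (made-up example data).
VULNERABLE = {"1.2.3"}

def extract_version(filename):
    # Grab the first run of dot-separated numbers, e.g. "1.2.3".
    m = re.search(r"\d+(?:\.\d+)+", filename)
    return m.group(0) if m else None

def is_flagged(filename):
    return extract_version(filename) in VULNERABLE

# "lib-1.2.3-patched.js" still yields "1.2.3", so it gets flagged,
# while "lib-1-patched.2.3.js" yields "2.3" and sails through.
```

Which would explain why an actually-patched file fails the scan under one name and passes under the other, without anyone ever testing the exploit.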
This was my first time with vulnerability testing and I was kinda let down by it. I assumed they had people (or tools) actually trying to exploit the site (the way I tested that my patch for the JavaScript library prevented the CVE from working), instead of just reading version numbers and comparing them to a list of known issues.
I get what you are saying, but I also think the parent has a point: if it was that easy to do an end run around the scanning (accidentally or otherwise), then it's not really suitable for auditing external parties. For your internal teams you can probably have some confidence that they will fix things in good faith.
Scanning is useful for automating a "thing", but your audit is multifaceted and should probably require peer review (pull/merge requests).
In simple terms, scanning should _hopefully_ help a dev who wants to do the right thing, but it won't help a malicious dev, so you're gonna need something else.
Peer review, least privileged access, protective monitoring, etc.
When I was coming up as a CMS operator, there were a lot of jobs undertaking site updates for banks and law firms - the JDs always came with caveats about needing to understand both regulatory and security issues.
I never applied, as I like pushing the boundaries when it comes to using CMSs, but if they're calling it out from day one, it's also on the operator not to do it even if they could.