oracle@dætalytica:~$

Privacy -- Why Should I Care?

Dætalytica
EMPOWERING THE PEOPLE TO TAKE CONTROL OF THEIR DIGITAL LIVES.

Privacy & Information Security

“Privacy is necessary for an open society in the electronic age. Privacy is not secrecy. A private matter is something one doesn’t want the whole world to know, but a secret matter is something one doesn’t want anybody to know. Privacy is the power to selectively reveal oneself to the world.” (Eric Hughes, A Cypherpunk’s Manifesto)

OSINT, Data & Privacy: Why Does It Matter?

People say that if you’re not doing anything wrong, you shouldn’t have anything to hide. But if that were true, why do we put blinds on our windows? You wouldn’t want your current boss to see your new job searches, would you?

We act differently in public when there are people watching us than when we’re alone or surrounded by our closest friends. The reality is that we all have things that we would like to keep private. Privacy is an internationally recognized fundamental right and is essential to democratic societies. Even things we don’t think are worth hiding might later be used against us in unexpected ways. Things like age, ethnicity, gender, health, political opinions, religious beliefs, or who we associate with shouldn’t be used to make judgements about us.

However, whistleblowers such as Edward Snowden have shown us the extent to which our privacy is being invaded on a daily basis. Affronts to our privacy are often justified as the necessary cost of national security or personal safety, as is the case with the Patriot Act’s anti-terrorism initiatives. Mass surveillance is framed as a price worth paying for a public good.

But so what? Many people go about their day believing that surveillance programs are not directed at them, but rather at society’s criminals. However, indiscriminate surveillance has become routine, capturing both our online and offline activities, including text messages, phone calls, browser history, and location. And while such data gathering activities are common practice, their necessity and efficacy are questionable. For example, in 2016 the FBI proposed a program to capture the internet search history of middle and high school students. The program was framed as an attempt to identify adolescents “at risk” of recruitment by extremist groups, despite a lack of published evidence that online recruitment of adolescents is even an issue.

It is not only government organizations that conduct surveillance; it is also the corporations and industries we interact with daily. Whether you are aware of it or not, you can be followed all over the internet. The things you like on social media, your search history, and even the apps you download are all compiled to create a digital picture of you to be sold to third-party advertisers. In reference to non-paid services such as Facebook, a word of caution: “if it’s free, you’re the product.”

Information is power. The more someone knows about us, the more influence they can exert over our decisions, behavior, and even our reputations. At best this can influence your decision-making, as with advertisements, but at worst it might cause great harm, as with the release of information about an individual’s non-protected identity, such as sexual orientation, to discriminatory employers. Privacy measures help ensure that you are empowered to choose with whom, when, why, and how your information is shared. Privacy is a complex issue, and we are still navigating how to address these concerns in a 21st-century context.

How to Claim Your Digital Identity

Seize Privacy | Fortify Security | Forestall Analysis | Dætalys

1,946,181,599. That’s how many records containing sensitive or otherwise personally identifiable information (PII) were reported to have been compromised between 01JAN2017 and 20MAR2018 alone.

Cleaning up locally:

History, cookies, and cache (HCC) have their functional purposes, but they are bad for privacy. Websites and ISPs can both view and record the data associated with them, so make sure your browser is always configured to delete HCC. If you want to go the extra mile and make sure your HCC data, as well as any other data you delete, is permanently shredded: BleachBit is FOSS and outperforms CCleaner, which is proprietary and whose makers don’t clearly explain what they do with your data. Be sure to clean your HCC on every device and browser you use. And remember (in reference to file deletion): if you didn’t shred those files, they’re recoverable.
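To make “shredding” concrete, here is a minimal sketch of what tools like BleachBit do conceptually: overwrite the file’s bytes before unlinking it, so the old contents can’t simply be undeleted. This is my own illustration, not BleachBit’s code, and on SSDs or journaling/copy-on-write filesystems the old blocks may still survive.

```python
import os
import secrets

def shred(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before unlinking it.

    Illustrative only: on SSDs, journaling, or copy-on-write
    filesystems the original blocks may survive anyway, which is
    why dedicated tools exist for this job.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)
```

A plain `os.remove()` only drops the directory entry; the overwrite passes are what make recovery (on traditional disks, at least) impractical.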

Cleaning up online:

Go into your E-mail client and search “sign up” or “verify”. If you’re like me, this won’t produce any results. But if you haven’t deleted all of those E-mails you get when you first open an account, this should produce some good starting points for what accounts you have that you may want to have permanently deleted. If any account that you signed up for experiences a breach, any accounts that share that E-mail address and password combination are also compromised. So securing the accounts you still use is a top priority. Bitwarden kicks ass at that. But a word of caution on the password length you set for it to generate: look up the service’s password restrictions/requirements first. Don’t create a 128-character password for a service that only accepts up to 16 characters, or you’ll be creating a new password again the first time you try to sign in. 32 characters, generally speaking, is going to be accepted by almost anyone. However, an encryption key cannot be considered “safe” if it is less than 75 bits (50 bits if you ask the US government). So do with that information what you will. I personally make my passwords as long as possible. But that typically turns into me generating upwards of 6 before I have one that I can sign in with.
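The relation between password length and those bit figures is just length × log2(charset size). A quick sketch (mine, assuming a uniformly random generator over the 94 printable ASCII symbols, which is roughly what Bitwarden-style generators use):

```python
import math
import secrets
import string

# Printable ASCII minus space: roughly the pool a password generator draws from.
CHARSET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def entropy_bits(length: int, charset_size: int = len(CHARSET)) -> float:
    """Entropy of a uniformly random password: length * log2(charset size)."""
    return length * math.log2(charset_size)

def generate(length: int = 32) -> str:
    """Generate a random password from CHARSET using a CSPRNG."""
    return "".join(secrets.choice(CHARSET) for _ in range(length))
```

With this pool, 12 characters already clears the 75-bit floor mentioned above (11 does not), and a 32-character password carries around 210 bits, far past what anyone needs to brute-force-proof an account.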

Go to every individual account you wish to permanently delete and go through the process of deleting (not deactivating) the account. This can be super simple or lengthy and complicated. Every service varies. https://justdeleteme.xyz is a great resource. But you can also do a simple Google search. Some accounts are, unfortunately, permanent. If it is a service you are certain you have no more use for, falsify all information before signing out.

Next, go to Google’s My Activity page (https://myactivity.google.com/). Have a look around at what information Google has compiled on you. If you can’t justify an absolute NEED for keeping any of that data within Google, go through each section and delete EVERYTHING. Then, go through each section and turn everything off to stop at least that much of Google’s data collection. Have an established NEED before deciding to keep something on. Anything that you do not delete and turn collection off for becomes another thing you have to closely monitor.

Onward to https://www.google.com/ and search for your E-mail address. Do the same thing at https://haveibeenpwned.com. This is all of your leaked data just chilling, exposed for anyone to see. If you have a single result, you have a real emergency that needs your immediate attention. Change your password or close the account, and also change the password of any other account using that password. Bitwarden is king here!

Lastly, people-search sites. Go to https://pipl.com/ and look for yourself. You will most likely find yourself; most anyone would. This is data collected about you that’s been bought and sold countless times. Go to https://www.anywho.com/ and search your phone number. You’re welcome to continue hitting up people-search sites, but these tell you enough. It’s completely up to you to have this information removed: contact the website’s support team, contact the owner. Some good resources for this are https://www.computerworld.com/article/2832912/how-to-get-your-personal-data-removed-from-people-search-websites.html and https://www.computerworld.com/article/2849263/doxxing-defense-remove-your-personal-info-from-data-brokers.html. Feel naked yet?
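A nice property of Have I Been Pwned’s Pwned Passwords API is k-anonymity: you only ever send the first 5 hex characters of the SHA-1 hash, and match the returned suffixes locally. A sketch of the client-side matching step (the network fetch is left out, so `range_response` here is the reply body you would get from `https://api.pwnedpasswords.com/range/<prefix>`):

```python
import hashlib

def pwned_count(password: str, range_response: str) -> int:
    """Match a password against a Pwned Passwords k-anonymity reply.

    The server never sees the password: only the 5-char SHA-1 prefix
    is sent, and it answers with `SUFFIX:COUNT` lines that we compare
    against our full hash locally.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    suffix = digest[5:]
    for line in range_response.splitlines():
        if ":" not in line:
            continue
        candidate, count = line.split(":", 1)
        if candidate.strip() == suffix:
            return int(count)
    return 0  # not found in any known breach
```

A nonzero return means the password has appeared in that many breached records and should be retired immediately.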

Social media!

Avoid using your full name. Keep your account private, utilizing as many privacy options as possible. Minimize data access (e.g. Facebook doesn’t need your address, Snapchat doesn’t need your location; only input or allow access to information that is NEEDED). Don’t use “sign in with Facebook” type shit for ANY reason unless Facebook (or whatever service you’re signing in with) owns the service you’re signing into. Try to avoid showing your full face. Stranger danger! Accept only people you actually know.

Web Browsers

Browser Fingerprint: Is your browser configuration unique? Your browser sends information that makes you unique amongst millions of users, and therefore easy to identify.

When you visit a web page, your browser voluntarily sends information about its configuration, such as available fonts, browser type, and add-ons. If this combination of information is unique, it may be possible to identify and track you without using cookies. The Electronic Frontier Foundation created a tool called Panopticlick to test your browser and see how unique it is.
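The core of fingerprinting is just hashing everything the browser volunteers into one stable identifier. A toy sketch (the attribute names are mine, for illustration; real trackers use dozens more signals such as canvas, audio, and timezone):

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Collapse a browser's volunteered configuration into a stable ID.

    Sorting the keys makes the hash deterministic for the same config,
    so the same browser gets the same ID on every visit -- no cookie needed.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

common = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "fonts": ["Arial", "Calibri", "Times New Roman"],
    "do_not_track": False,
    "webgl": True,
}
# One unusual font is enough to split you off from the crowd.
unusual = dict(common, fonts=common["fonts"] + ["Comic Neue"])
```

The point of the "look common" advice below follows directly: any single attribute that differs from the herd changes the whole hash and makes you trackable.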

You need to find what most browsers are reporting, and then use those variables to bring your browser into the same population. This means having the same fonts, plugins, and extensions installed as the large installed base. You should have a spoofed user-agent string to match what the large userbase has. You need to have the same settings enabled and disabled, such as DNT and WebGL. You need your browser to look as common as everyone else’s. Disabling JavaScript, using Linux, or even using the Tor Browser Bundle will make your browser stick out from the masses.

Modern web browsers have not been architected to assure personal web privacy. Rather than worrying about being fingerprinted, it seems more practical to use free software plugins to regain control. They not only respect your freedom, but also your privacy. You can get much further with these than by trying to manipulate your browser’s fingerprint.

Related Information

How Unique Is Your Web Browser? Peter Eckersley, EFF.
BrowserLeaks.com – Web browser security testing tools that tell you exactly what personal identity data may be leaked without any permissions as you surf the Internet.

Firefox Hardening

This is a collection of privacy-related about:config tweaks that will enhance the privacy of your Firefox browser.

Preparation: Enter “about:config” in the Firefox address bar and press Enter. Press the button “Accept the Risk and Continue” [FF71+] or “I accept the risk”. Copy and paste each of the preferences below (for example “webgl.disabled”) into the search bar, and set each of them to the stated value (such as “true”).

Getting started:

privacy.firstparty.isolate = true
A result of the Tor Uplift effort, this preference isolates all browser identifier sources (e.g. cookies) to the first-party domain, with the goal of preventing tracking across different domains. (Don’t do this if you are using the Firefox add-on “Cookie AutoDelete” with Firefox v58 or below.)

privacy.resistFingerprinting = true
A result of the Tor Uplift effort, this preference makes Firefox more resistant to browser fingerprinting.

privacy.trackingprotection.fingerprinting.enabled = true
[FF67+] Blocks fingerprinting.

privacy.trackingprotection.cryptomining.enabled = true
[FF67+] Blocks cryptomining.

privacy.trackingprotection.enabled = true
This is Mozilla’s new built-in tracking protection. One of its benefits is blocking tracking (e.g. Google Analytics) on privileged pages where the add-ons that usually do that are disabled.

browser.send_pings = false
The attribute would be useful for letting websites track visitors’ clicks.

browser.urlbar.speculativeConnect.enabled = false
Disable preloading of autocomplete URLs. Firefox preloads URLs that autocomplete when a user types into the address bar, which is a concern if URLs are suggested that the user does not want to connect to. Source

dom.event.clipboardevents.enabled = false
Prevents websites from being notified when you copy, paste, or cut something from a web page, and from learning which part of the page had been selected.

media.eme.enabled = false
Disables playback of DRM-controlled HTML5 content, which, if enabled, automatically downloads the Widevine Content Decryption Module provided by Google Inc. Details

DRM-controlled content that requires the Adobe Flash or Microsoft Silverlight NPAPI plugins will still play, if installed and enabled in Firefox.

media.gmp-widevinecdm.enabled = false
Disables the Widevine Content Decryption Module provided by Google Inc., used for the playback of DRM-controlled HTML5 content. Details

media.navigator.enabled = false
Websites can track the microphone and camera status of your device.

network.cookie.cookieBehavior = 1
Disable cookies.

0 = Accept all cookies by default
1 = Only accept from the originating site (block third-party cookies)
2 = Block all cookies by default

network.http.referer.XOriginPolicy = 2
Only send the Referer header when the full hostnames match. (Note: if you notice significant breakage, you might try 1 combined with an XOriginTrimmingPolicy tweak below.) Source

0 = Send Referer in all cases
1 = Send Referer to same-eTLD sites
2 = Send Referer only when the full hostnames match

network.http.referer.XOriginTrimmingPolicy = 2
When sending Referer across origins, only send scheme, host, and port in the Referer header of cross-origin requests. Source

0 = Send full URL in Referer
1 = Send URL without query string in Referer
2 = Only send scheme, host, and port in Referer

webgl.disabled = true
WebGL is a potential security risk. Source

browser.sessionstore.privacy_level = 2
This preference controls when to store extra information about a session: contents of forms, scrollbar positions, cookies, and POST data. Details

0 = Store extra session data for any site. (Default starting with Firefox 4.)
1 = Store extra session data for unencrypted (non-HTTPS) sites only. (Default before Firefox 4.)
2 = Never store extra session data.

beacon.enabled = false
Disables sending additional analytics to web servers. Details

browser.safebrowsing.downloads.remote.enabled = false
Prevents Firefox from sending information about downloaded executable files to Google Safe Browsing to determine whether they should be blocked for safety reasons. Details

Disable Firefox prefetching pages it thinks you will visit next. Prefetching causes cookies from the prefetched site to be loaded and other potentially unwanted behavior. Details here and here.

network.dns.disablePrefetch = true
network.dns.disablePrefetchFromHTTPS = true
network.predictor.enabled = false
network.predictor.enable-prefetch = false
network.prefetch-next = false

network.IDN_show_punycode = true
Not rendering IDNs as their Punycode equivalent leaves you open to phishing attacks that can be very difficult to notice. Source
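Rather than clicking through about:config by hand, the same prefs can be set declaratively in a user.js file in your Firefox profile directory; Firefox applies it on every launch, overriding about:config. A minimal fragment covering a few of the tweaks above might look like this (the arkenfox template is a maintained, complete version):

```javascript
// user.js -- place in your Firefox profile directory.
// Fragment only, covering a subset of the prefs discussed above.
user_pref("privacy.firstparty.isolate", true);
user_pref("privacy.resistFingerprinting", true);
user_pref("browser.send_pings", false);
user_pref("network.cookie.cookieBehavior", 1);
user_pref("network.http.referer.XOriginPolicy", 2);
user_pref("webgl.disabled", true);
user_pref("network.prefetch-next", false);
user_pref("network.IDN_show_punycode", true);
```

The upside is that your hardening survives profile resets and is easy to diff and version-control.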

Firefox user.js Templates

arkenfox user.js (formerly ghacks-user.js) – An ongoing comprehensive user.js template for configuring and hardening Firefox privacy, security and anti-fingerprinting.

Related Information

Firefox Privacy: Tips and Tricks for Better Browsing – A good starting guide for users looking to keep their data private and secure.
ffprofile.com – Helps you to create a Firefox profile with the defaults you like.
Privacy Settings – A Firefox add-on to alter built-in privacy settings easily with a toolbar panel.
Firefox Privacy Guide For Dummies – A guide on ways (already discussed and others) to improve your privacy and safety on Firefox.

Online Privacy Through OpSec and Compartmentalization

On the Internet, Nobody Knows You’re a Dog

Privacy and anonymity on the Internet are perennial clickbait topics. At least, that’s been the case since some of the Eternal September crowd figured out that ‘On the Internet, nobody knows you’re a dog.’ might be an unrealistic expectation. We’ve seen the warnings: ‘You have zero privacy.’ [1999]; Google’s ‘Broken Privacy Promise’ [2016]; ‘confronting the end of privacy’ [2017]; ‘privacy is dead’ [2017]; ‘technology can’t fix it’ [2017]; and ‘Privacy as We Know It Is Dead’ [2017]. There was Eric Schmidt’s classic rationalization: ‘If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.’ [2009]. And recently, there’s been hand-wringing about ‘anonymous harassment’ [2015] and how ‘anonymity makes people mean’ [2015]. For a more nuanced discussion of online ethics, see ‘Social Networking and Ethics’ [2012/2015]. In any case, leaving aside argument about whether online anonymity is “good” or “bad”, there’s no doubt that it can be a prudent and effective strategy [2017]. And there’s nothing new here: these were contentious issues in the 1780s, during public debate on ratification of the US Constitution.
But hey, articles in The Economist are still published anonymously:

The main reason for anonymity, however, is a belief that what is written is more important than who writes it.

Why Mass De-anonymization Is Far Likelier Than You Might Expect I’ve written a lot about online privacy and anonymity. Lately, however, I’ve focused primarily on matters of technical implementation. But a recent article about mass de-anonymization has moved me to write more about strategy and tactics. The article is based on a paper by Jessica Su and coworkers about de-anonymizing users by correlating their social media participation and browsing history. That’s too perfect a teaching opportunity to pass up. Anyway, the abstract begins:

Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show—theoretically, via simulation, and through experiments on real user data—that de-identified web browsing histories can be linked to social media profiles using only publicly available data.

Web browsing histories are collected by ISPs, the online advertising industry, at least some anti-malware firms, and various TLAs. So everyone online is vulnerable to multiple adversaries, who may collude, and leverage complementary data.

Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one’s feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user’s social profile.

OK, but this assumes that people are naive. Using one’s real name online, with just one social network, is an obvious vulnerability. And it’s one that’s easily fixable, as I explain below. Basically, you just replace user/person with persona, use as many of them as you like, and make sure that they’re not associated with each other.

We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time.

Impressive. So much for the dismissal that browsing history isn’t ‘sensitive information’ [2017]. But even so, each user could have several online identities aka personas. Each persona would have its own Twitter account, its own social network, its own set of interests, and so on. And each persona would access the Internet in a different way, using various VPN services, Tor, and combinations thereof. So each persona would have its own browsing history, potentially unrelated to the others.
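To see why 30 links can suffice, here is a toy sketch (my reconstruction, not the authors’ actual model) of the likelihood idea: weight each history link by how rare it is across candidate feeds, and pick the profile whose feed best explains the history.

```python
import math

def deanonymize(history, feeds):
    """Toy maximum-likelihood linking of a browsing history to a profile.

    `feeds` maps each candidate user to the set of links in their feed.
    A link appearing in few feeds is strong evidence, so each match is
    weighted by -log of the fraction of feeds containing it; a link in
    every feed carries zero weight, since it distinguishes nobody.
    """
    n = len(feeds)
    def weight(link):
        k = sum(link in feed for feed in feeds.values())
        return -math.log(k / n) if k else 0.0
    scores = {user: sum(weight(l) for l in history if l in feed)
              for user, feed in feeds.items()}
    return max(scores, key=scores.get)
```

With only a handful of rare links, the true profile’s score dominates, which is exactly the effect the experiments quantify.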

To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them.

Impressive, indeed. But again, these were naive subjects. I can’t imagine that they were warned, and given the opportunity to be deceptive.

We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks.

That is problematic, for sure. ISPs also collect and sell browsing history. Some anti-malware firms may do so, as well. And then we have various TLAs, which likely collect whatever they can, however they can, and from wherever they can.

In the paper’s introduction, Su and coworkers note:

In this paper we show that browsing histories can be linked to social media profiles such as Twitter, Facebook, or Reddit accounts. We begin by observing that most users subscribe to a distinctive set of other users on a service. Since users are more likely to click on links posted by accounts that they follow, these distinctive patterns persist in their browsing history. An adversary can thus de-anonymize a given browsing history by finding the social media profile whose feed shares the history’s idiosyncratic characteristics.

That’s arguably not very surprising. It’s just what people do. Or at least, that’s what naïve people do. And then they point out:

Of course, not revealing one’s real-world identity on social media profiles also makes it harder for the adversary to identify the user, even if the linking is successful. Nascent projects such as Contextual Identity containers for Firefox help users more easily manage their identity online. None of these solutions is perfect; ultimately, protecting anonymity online requires vigilance and awareness of potential attacks.

[Fight Club (Brad Pitt and Edward Norton)]

[Compartmentalization: Isolation of Military Aircraft Using Blast Walls aka Revetments]

That’s excellent advice, for sure. But pseudonymity alone is a fragile defense. Once one has been de-anonymized in any context, everything is de-anonymized, because it’s all tied together. There is no forward security. Far more robust is to fragment and compartmentalize one’s online activity across multiple unlinked personas. With effective compartmentalization, damage is isolated and limited. And overall, it’s essential to implement and practice strong Operations Security (OPSEC). But first, before getting into specifics, it’s instructive to consider some examples, showing how easily and spectacularly online anonymity can fail.

Examples: How Easily and Spectacularly Online Anonymity Can Fail

To illustrate how online anonymity can fail, I have researched several examples. The mistakes made provide useful context for the discussion and recommendations that follow. The examples all involve criminal prosecutions, because that’s what generally gets reported: proceedings in many jurisdictions are largely public, and crime reporting is always popular. Anonymity failure per se isn’t newsworthy, and information may be suppressed. Even so, public data about criminal prosecutions may be misleading. Some evidence is typically under protective order. Also, investigators may have employed parallel construction to protect sources and methods that are sensitive or illegal. Such evidence is not even presented to courts, but merely exploited to obtain usable evidence. But what we have is what we have. Finally, hindsight is of course 20/20, and I intend no disrespect to anyone involved in these examples.

Example #1: Silk Road

Consider how FBI investigators identified Ross Ulbricht as Silk Road’s founder, later known as Dread Pirate Roberts. As explained in the FBI complaint, he had promoted Silk Road on the Shroomery Message Board and Bitcoin Forum in January 2011, using the handle altoid. Silk Road had never before been mentioned on either site. The posts are still there, so you can follow the path yourself. In Google Advanced Search, specify the exact phrase “silk road” and the site bitcointalk.org. Execute the search. Then click Tools, and look at custom date ranges around 2011, when Silk Road opened for business. For the range 7/1/2010-12/31/2010, the first result is “A Heroin Store - Bitcointalk.org”. Search the page for “silk road”, and you see this post from ShadowOfHarbringer, quoting altoid:

Has anyone seen Silk Road yet? It’s kind of like an anonymous amazon.com. I don’t think they have heroin on there, but they are selling other stuff. They basically use bitcoin and tor to broker anonymous transactions. It’s at http://tydgccykixpbu6uz.onion. Those not familiar with Tor can go to silkroad420.wordpress.com for instructions on how to access the .onion site.

Someone (presumably altoid) has deleted the actual post. Just quotes by ShadowOfHarbringer, sirius and FatherMcGruder remain. I’ll say more about that later. Here’s a diagrammatic representation of the search process:

[Venn diagram about finding altoid]

That alone wasn’t a fatal error. I mean, who is altoid? But now look at what else altoid posted on Bitcoin Forum. In particular, look at his last post, dated 11 October 2011: “I’m looking for the best and brightest IT pro in the bitcoin community to be the lead developer in a venture backed bitcoin startup company. … If interested, please send your answers to the following questions to rossulbricht at gmail dot com”. Whoops. Now the FBI had a link from Silk Road to Ross Ulbricht.

[posts by altoid to bitcointalk.org]

So how does someone accidentally link their meatspace email address to the development of Silk Road, a Heroin Store? I have no clue. Perhaps relevant is the fact that he registered the new account silkroad on 28 February 2011. He subsequently used the silkroad account for Silk Road matters, and the altoid account for general Bitcoin ones. I’m guessing that it was sometime in Spring 2011 that he deleted his post about Silk Road in the altoid account. But somehow, he didn’t notice that others had quoted it. The silkroad account was last active on 25 August 2011, about six weeks before the fateful IT pro post by the altoid account. Maybe he just forgot which account had posted what.

The timeline of the FBI investigation isn’t clear from the complaint, but another key win was finding the server. That was far too easy. Agents testified that the server leaked its actual IP address, bypassing Tor. It seems that they read about the leak on Reddit. They don’t say exactly how they forced the leak, but I suspect that it involved a web server misconfiguration like this. At the FBI’s request, Reykjavik police provided access to the server. And the FBI imaged the disk.

That was a seriously boneheaded mistake. I mean, it was clear by 2012 that Tor onion servers should not have public IP addresses. I recall seeing a guide about that in 2010-2011, either on The Hidden Wiki or Freedom Hosting. But anyway, bad as it was for the FBI to have that data, how did they figure out that Dread Pirate Roberts was Ross Ulbricht? Other than the altoid screwup, I mean. Well, the complaint alleges that the server’s ~/.ssh/authorized_keys file contained a public SSH key with user frosty@frosty. So apparently, the FBI googled for stuff like “frosty tor”. And bam, they found this 2013-03-16 post by frosty on Stack Overflow. That’s still on the first results page. Also, the PHP code in that question is reportedly similar to what FBI investigators found on the server. And being the FBI, it wasn’t hard for them to learn that Ross Ulbricht owned the account (with email frosty@frosty.com). Now they had two independent links from Silk Road to Ross Ulbricht.

And there was a third link. Ross had apparently ordered fake IDs from Silk Road. But DHS opened the package, and dropped by to question him. He denied responsibility, and noted that anyone could have bought the fake IDs on Silk Road, and had them sent to him. That seems reasonable, no? I mean, a Ukrainian hacker did have heroin sent to Brian Krebs, and then had him swatted. But whatever. Silk Road went into the DHS agent’s report, and that eventually came back to bite Ross.

OK, so promoting your illegal darkweb site online is fine. And asking questions online about that site is also fine. But you want to be as anonymous as possible when you’re doing that stuff. And posting your meatspace Gmail address, or using a forum account registered with that address, is not anonymous. Ross was also careless in other ways about linking Silk Road to himself. If he had always worked through Tor (or better, hit Tor through a nested VPN chain) and had used pseudonyms to register with Stack Overflow and Bitcoin Forum, he might be a free man today. If you want to read more about Ross Ulbricht, the grugq has published a comprehensive (albeit dated) analysis. There are also decent articles in Wired and Motherboard, and Gwern’s analysis.

But wait. There’s another level of pwnage to explore. Maybe it’s simplistic to say that Ross Ulbricht is Dread Pirate Roberts (DPR). His attorneys argued that he was just a pawn, and that the real Dread Pirate Roberts was his mentor Variety Jones aka Cimon. For example, they presented evidence that someone was accessing the DPR account on the Silk Road forum for six weeks after Ross Ulbricht had been taken into custody. Plus voluminous chat logs between Ross Ulbricht, Variety Jones and others. It’s an interesting story, full of intrigue and drama, involving rogue FBI agents and so on. But here’s the relevant lesson: according to the complaint, Roger Thomas Clark was identified as Variety Jones “through an image of his passport stored on Ulbricht’s computer”. That is, “the Silk Road administrator insisted on his employees revealing their identities to him, though he promised to keep the copies of their identifying documents encrypted on his hard drive.” So maybe Variety Jones wasn’t a perfect mentor, notwithstanding his vision of a private digital economy. Still, he’s for sure no pushover.

If you’re interested in reading Variety Jones’ stuff from Silk Road Forums, the archives are here, and in more usable form, here. I gather that there’s also a lot in the chat logs that Ross Ulbricht retained. But I haven’t found a coherent standalone collection. For background, see Andrew Goldman’s The Common Economic Protocols and Toward A Private Digital Economy.

Example #2: KickassTorrents

Consider KickassTorrents. Artem Vaulin registered one of the associated domains (kickasstorrents.biz) using his real name. That’s basically the same error that Ross Ulbricht made with Stack Overflow, but it’s far more egregious here, because of the direct association. Also, logs from Apple and Facebook linked his personal Apple email address to the site’s Facebook page. That was another failure to compartmentalize his real identity from his illegal enterprise. But for those mistakes, KickassTorrents would likely still be serving its users, and we would likely never have heard of Artem Vaulin.

Example #3: The Love Zone

Failure to compartmentalize also brought down The Love Zone and many of its users. Admin Shannon McCoole (skee) reportedly began his posts with the unusual greeting “Hiyas” (perhaps from Tagalog). That’s strange, but so what? Well, it seems that investigators unoriginally googled for “skee hiyas”, and found posts on various online forums by similarly named users who used the same unusual greeting. On one of those forums, such a user had sought information about 4WD lift kits. So investigators then restricted their searches with suggested SKUs. And that led them to his Facebook page, where he had bragged about his vehicle. There, they also learned that he worked as a nanny. Busted.

OK, so it’s outstanding that they tracked him down. But even better, his mistakes are instructive. It’s much like the compartmentalization failure that pwned Ross Ulbricht. That is, Shannon McCoole linked his pedophile and meatspace personas through two factors: 1) similar usernames; and 2) unusual greeting. However, he apparently did successfully obscure his site’s IP address. So arguably, if he had used a distinct username and style (at least, a different greeting) on each online forum, he could have avoided arrest.

Example #4: Sabu de LulzSec

Sabu’s downfall clearly illustrates the roles of intentionality, trust and time. Sabu (Hector Xavier Monsegur) was born in 1983, and started hacking in his early teens. He reportedly hung out on EFnet IRC chat servers. Like most n00bs, he was careless. At least once, he apparently made the mistake of logging in without obscuring his ISP-assigned IP address. And someone, perhaps the admin, was retaining chat logs. That’s to be expected. But based on those logs, they could link his various IRC nicknames over time.

Years later, Sabu became famous through LulzSec. I gather that he was playing elite hacker to a crowd of script kiddies. That apparently offended some of his old EFnet associates. Plus the fact that LulzSec was causing trouble for them, professionally. And so they considered him a jerk, and eventually doxxed him.

Before researching this, based on casual reading, I had assumed that Hector had just been careless about OPSEC. But no, it’s not that Hector the LulzSec star was careless. It’s that Hector had been careless, many years before, when he was just a kid, playing at being a hacker. And that mattered, years later, because old associates could link his past personas back to the present. Still, he could have been more mindful of that risk, and so compartmentalized his personas more carefully across time. I mean, this guy had been hacking stuff for well over a decade!

Aval0n logs about Sabu

Example #5: Sheep Marketplace

It’s arguable whether Tomáš Jiříkovský operated Sheep Marketplace, or merely provided hosting for the VPS that it ran on. But it’s pretty clear that he stole 96,000 BTC from it, and then pwned himself when he cashed out. The story is instructive: it illustrates how pride and greed can lead to stupidity and pwnage. Sheep Marketplace was created in March 2013, and grew modestly after Silk Road was pwned in October 2013. But before long, Tomáš had been doxxed as the alleged owner. Gwern Branwen bet that Sheep Marketplace would be dead within the year. In a later paste, Gwern alleged that someone had alerted the FBI that Tomáš had complained on sheepmarketplace.com in 2013 about the problems of running a Bitcoin-using hidden service. Also see this paste, perhaps from Gwern’s source. Anyway, Sheep Marketplace had started as a clearnet site, and then migrated quite obviously to Tor. And it was dead in far less than a year: it shut down less than two months after the Silk Road bust, on 03 December 2013, after claims of hacking and Bitcoin theft. But it’s more than a little suspicious that the Bitcoin price jumped from $200 to $1000 during November 2013. If one had been planning to take the money and run, that was arguably a good time.

In a vain attempt to recover lost Bitcoins, or at least to identify the thief, some redditors tracked suspicious Bitcoin through the blockchain. Although the thief apparently used Bitcoin Fog for obfuscation, 96000 Bitcoin predictably overwhelmed the mixer. So the stolen Bitcoin was traced to a wallet owned by BTC-e, a digital currency exchange. But there, the trail went dead. The BTC-e wallet identified by redditors was used generally in BTC-e operations. So it seemed likely that the thief had already cashed out. However, in contrast to the Bitcoin blockchain, BTC-e’s financial operations are anything but public. And now, the US has taken it down, and arrested one Alexander Vinnik. Allegations include money laundering and facilitation of criminal activity, such as ransomware and theft from Mt Gox. But maybe BTC-e isn’t yet entirely dead.
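The redditors’ tracing exercise can be sketched abstractly: starting from the theft address, repeatedly follow transaction outputs until the coins reach a known exchange wallet. Here is a minimal sketch over a toy transaction graph; all addresses are invented placeholders, and real tracing must also handle amounts, change outputs, and clustering heuristics:

```python
from collections import deque

# Toy transaction graph: address -> addresses it paid.
# All addresses here are invented for illustration.
tx_graph = {
    "theft_wallet": ["mixer_in_1", "mixer_in_2"],
    "mixer_in_1": ["mixer_out_1"],
    "mixer_in_2": ["mixer_out_1", "mixer_out_2"],
    "mixer_out_1": ["btce_hot_wallet"],
    "mixer_out_2": ["btce_hot_wallet"],
}

# Hypothetical label set: wallets known to belong to exchanges.
EXCHANGE_WALLETS = {"btce_hot_wallet"}

def trace_taint(start):
    """Breadth-first search from the theft address; return all
    tainted addresses, and any known exchange wallets reached."""
    seen, hits = set(), set()
    queue = deque([start])
    while queue:
        addr = queue.popleft()
        if addr in seen:
            continue
        seen.add(addr)
        if addr in EXCHANGE_WALLETS:
            hits.add(addr)
        queue.extend(tx_graph.get(addr, []))
    return seen, hits

tainted, exchanges = trace_taint("theft_wallet")
print(exchanges)  # the trail ends at the exchange, as it did for the redditors
```

As in the Sheep case, a mixer only helps if the tainted amount is small relative to the mixer’s pool; 96,000 BTC dominated the graph and stayed traceable.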

Anyway, in an 08 December interview in the Czech Republic’s major newspaper, Tomáš disavowed any role in Sheep Marketplace. However, by January 2014, Tomáš had been arrested:

Last year in January, a payment into the new bank account of 26-year-old Eva Bartošová set off the security controls at Air Bank (a Czech bank): almost 900 thousand crowns from a foreign company that exchanges virtual bitcoins for real money.

The young woman could not credibly explain the source of the money to the bank officers. Further investigation revealed that millions had already travelled by this route, and that behind it was a certain Tomáš Jiříkovský, whom amateur internet investigators had connected with the theft of money from the web marketplace Sheep Marketplace, where people traded heavily in the bitcoin currency. The operators of the marketplace put the damage at more than 100 million crowns.

Officers of the Finance Ministry’s Financial Analytical Office, which detects suspicious transactions, mapped how Jiříkovský’s money travelled. It first left the foreign company Bitstamp Limited, which buys and sells bitcoins. The millions then arrived, in several transactions, either in the accounts of Jiříkovský and Bartošová, or in the accounts of the real-estate company and the lawyer who handled the house sale. Part of the money went to the original owner of the house; another part went to her bank as a one-time mortgage payment.

I’m guessing that Tomáš must have somehow transferred the money from BTC-e to Bitstamp. It didn’t help, however. Overall, this was a mind-boggling fail.

Example #6: Operation Onymous

In November 2014, hundreds of onion sites went down in Operation Onymous, an international effort involving the FBI and Europol. One of them was Silk Road 2.0, aka SR2. The scale of the operation was astounding. Nik Cubrilovic speculated that investigators had ‘simply vacuumed up a large number of onion websites by targeting specific hosting companies.’ But those who followed Tor carefully suspected a different sort of vacuuming. In July 2014, CMU researchers had canceled a Black Hat talk about ‘how hundreds of thousands of Tor clients, along with thousands of hidden services, could be de-anonymised within a couple of months.’ And a few days later, Roger Dingledine had posted about a ‘relay early’ traffic confirmation attack which had occurred in recent months: ‘So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.’ Yes, vacuuming.
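Dingledine’s summary can be illustrated abstractly: the attack used the choice between two cell types as a one-bit covert channel, written at the hidden-service directory and read back at the guard. A conceptual sketch of that signaling idea follows; this is not Tor’s actual cell format or protocol, and the onion name is a made-up placeholder:

```python
def encode_as_cells(name: str) -> list:
    """Encode each bit of a service name as a cell type:
    'relay' for 0, 'relay-early' for 1. Purely illustrative."""
    cells = []
    for byte in name.encode():
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            cells.append("relay-early" if bit else "relay")
    return cells

def decode_cells(cells: list) -> str:
    """Recover the name from the observed cell-type sequence,
    as an attacking guard relay conceptually could."""
    bits = [1 if c == "relay-early" else 0 for c in cells]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return out.decode()

# A 16-character placeholder onion name costs 128 cells to signal.
cells = encode_as_cells("exampleonion1234")
print(len(cells))  # 128
```

The point of the sketch is that the channel is invisible to anyone not looking at cell types, which is why relay-early cells flowing in the wrong direction were the telltale that Tor developers eventually detected.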

Those suspicions were confirmed in January 2015, after SR2 admin Brian Farrell was arrested. The affidavit stated: ‘From January 2014 to July 2014, a FBI NY Source of Information (SOI) provided reliable IP addresses for TOR and hidden services such as SR2…’. And a year later, CMU’s role was confirmed: “The record demonstrates that the defendant’s IP address was identified by the Software Engineering Institute (‘SEI’) of Carnegie Mellon University (CMU) [sic] when SEI was conducting research on the Tor network which was funded by the Department of Defense (‘DOD’).” So how did the FBI know about results of DoD-funded research by CMU? The FBI says: “For that specific question, I would ask them [Carnegie Mellon University]. If that information will be released at all, it will probably be released from them.” Perhaps this was a failed attempt at parallel construction.

Example #7: AlphaBay

This is an especially sad example. AlphaBay became one of the largest third-generation dark markets after Silk Road got pwned, and ran for about two years, until the US took it down in July 2017 and arrested suspected co-founder Alexandre Cazes. As with my other examples, he had allegedly made a stupid mistake: he “included his personal email address in one of the site’s welcome messages”. I’m not sure which is more surprising, that he did that, or that it took investigators that long to find the clue. But the saddest part is that he reportedly killed himself after being arrested.

Example #8: Brian Krebs’ Blog

No, Brian Krebs has not been pwned for something delicious. But doxxing ‘cybercriminals’ is one of his perennially popular topics. And you will find many examples of compartmentalization failure. Such as these:

Who Is the Antidetect Author?
Who Hacked Ashley Madison?
Who is Anna-Senpai, the Mirai Worm Author?
Who Ran Leakedsource.com?
Four Men Charged With Hacking 500M Yahoo Accounts

Online Privacy Through OPSEC with Compartmentalization Among Multiple Personas

The OPSEC Cycle

Common themes in these examples are poor planning, wishful thinking, and carelessness. Given the advantage of hindsight, it’s clear that these people were not paying enough attention. They weren’t planning ahead and thinking things through. That is, their Operations Security (OPSEC) was horrible. Basically, OPSEC is just common sense. But it’s common sense that’s organized into a structured process. An authoritative source is arguably the DoD Operations Security (OPSEC) Program Manual. OPSEC Professionals also has a slide deck, which is comprehensive and well-presented, but somewhat campy. It points out that the OPSEC “5-Step Process” is more accurately described as a continuous cycle of identification [of information that must be secured], analysis [of threats, vulnerabilities and risks] and remediation. That is, OPSEC is a way of being. For a hacker perspective, I recommend the grugq’s classic OPSEC for hackers. Also great are follow-on interviews in Blogs of War and Privacy PC.

Another great source is 73 Rules of Spycraft by Allen Dulles. Also see the original article about them by James Srodes, from the Intelligencer. Allen Dulles played a key intelligence role against Germany during WWII, and then in the Cold War, as the first civilian Director of Central Intelligence. He’s rather controversial, especially regarding his role in the Bay of Pigs fiasco, and perhaps the JFK assassination. David Talbot wrote a biography, The Devil’s Chessboard. And later opined: “I think that you can make a case, although I didn’t explicitly say this in the book, for Allen Dulles being a psychopath.” The CIA predictably disagreed, albeit rather politely. But noted progressive Joseph Palermo fundamentally agreed with Talbot’s assessment: “The Devil’s Chessboard is quite simply the best single volume I’ve come across that details the morally bankrupt and cynical rise of an activist intelligence apparatus in this country that was not only capable of intervening clandestinely in the internal affairs of other nations but domestically too.” Be that as it may, Allen Dulles had some excellent insights about OPSEC. At least, if you ignore the parts about managing human “assets”.

Identification of Critical Information, Analysis of Threats, and Identification of Vulnerabilities

The first step is the identification of information that must be secured. See the DoD OPSEC manual at p. 12. For our purposes, critical information fundamentally comprises one’s meatspace identity and location. Also critical are public indicators associated with them. For example, consider Ross Ulbricht. FBI investigators pieced together his posts as altoid on Bitcoin Forum to associate Silk Road with rossulbricht@gmail.com. They also linked the frosty@frosty SSH key on the Silk Road server with the frosty account (frosty@frosty.com) on Stack Overflow, which he had initially registered as Ross Ulbricht. That is, the indicators were altoid and frosty. Or consider Shannon McCoole. Investigators pieced together posts on The Love Zone and 4WD forums, using his username (skee) and his characteristic greeting (hiyas). Then they found his personal Facebook page, by searching for SKUs of particular 4WD lift kits that he had posted about. So for him, the indicators were skee, hiyas, and the SKUs. For Sabu, an IRC admin pieced together his various nicknames over time, linking his current nickname/persona with his meatspace identity, which had been revealed years before.
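The investigative pattern in all three cases is simple indicator correlation: collect (site, username, distinctive indicators) records, then look for overlaps. A toy sketch of the idea follows; the accounts and indicators are invented, loosely modeled on the McCoole case:

```python
# Each record: (site, username, distinctive indicators seen there).
# All data here is invented for illustration.
accounts = [
    ("hidden_forum", "skee", {"hiyas", "4wd"}),
    ("4wd_forum", "skee1987", {"hiyas", "lift-kit SKU 12345"}),
    ("facebook", "John Doe", {"lift-kit SKU 12345", "4wd"}),
]

def link_accounts(records, min_shared=1):
    """Return pairs of accounts that share at least min_shared
    indicators -- the core of cross-site persona linkage."""
    links = []
    for i, (site_a, user_a, ind_a) in enumerate(records):
        for site_b, user_b, ind_b in records[i + 1:]:
            shared = ind_a & ind_b
            if len(shared) >= min_shared:
                links.append((user_a, user_b, shared))
    return links

for a, b, shared in link_accounts(accounts):
    print(f"{a} <-> {b} via {shared}")
```

Notice that no single indicator is identifying on its own; it’s the chain of pairwise overlaps that walks an investigator from a hidden-service persona to a real-name Facebook page.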

The next steps are analysis of threats, and identification of vulnerabilities. From the DoD OPSEC manual at p. 13:

The threat analysis includes identifying potential adversaries and their associated capabilities and intentions to collect, analyze, and exploit critical information and indicators.

Wherever adversaries can collect and effectively exploit critical information and/or indicators, there are vulnerabilities. So who are your adversaries? And what are their capabilities? Anyone interested in you, with goals that you reject and fear, is an adversary. You probably have some sense of who they are, what they want, and what they can do. But what matters? In an interview with Micah Lee, Edward Snowden observed:

Almost every principle of operating security is to think about vulnerability. Think about what the risks of compromise are and how to mitigate them. In every step, in every action, in every point involved, in every point of decision, you have to stop and reflect and think, “What would be the impact if my adversary were aware of my activities?” If that impact is something that’s not survivable, either you have to change or refrain from that activity, you have to mitigate that through some kind of tools or system to protect the information and reduce the risk of compromise, or ultimately, you have to accept the risk of discovery and have a plan to mitigate the response. Because sometimes you can’t always keep something secret, but you can plan your response.

Anyway, none of that is possible without plans. Or at least, it’s impossible without some sense of what one’s plans will be. As Allen Dulles noted:

Never set a thing really going, whether it be big or small, before you see it in its details. Do not count on luck. Or only on bad luck.

This is arguably a central theme in all of my pwnage examples. When one is just playing around, with no real plans, or not even a clear sense of what one might plan, one may not worry enough about protecting one’s identity. And after one gets serious, and the stakes get higher, one may forget just how lax one’s OPSEC was. So do plan ahead, and think things through.

The final steps are risk assessment, and identification of countermeasures. From the DoD OPSEC manual at p. 13:

The risk assessment is the process of evaluating the risks to information based on susceptibility to intelligence collection and the anticipated severity of loss. It involves assessing the adversary’s ability to exploit vulnerabilities that would lead to the exposure of critical information and the potential impact it would have on the mission. Determining the level of risk is a key element of the OPSEC process and provides justification for the use of countermeasures. Once the amount of risk is determined, consider cost, time, and effort of implementing OPSEC countermeasures to mitigate risk.

impact vs likelihood example

That is, risks are characterized by their likelihood, aka probability, and their potential impact. To help prioritize risks and identify countermeasures, it’s common to visualize them, plotting probability vs impact. From Mind Tools:

The corners of the chart have these characteristics:

Low impact/low probability – Risks in the bottom left corner are low level, and you can often ignore them.

Low impact/high probability – Risks in the top left corner are of moderate importance – if these things happen, you can cope with them and move on. However, you should try to reduce the likelihood that they’ll occur.

High impact/low probability – Risks in the bottom right corner are of high importance if they do occur, but they’re very unlikely to happen. For these, however, you should do what you can to reduce the impact they’ll have if they do occur, and you should have contingency plans in place just in case they do.

High impact/high probability – Risks towards the top right corner are of critical importance. These are your top priorities, and are risks that you must pay close attention to.

High-impact/low-probability risks are highly problematic:

[I]t may often be easier to characterise the impact of an event than its likelihood, such as the impact of your wallet being stolen against working out the numerical likelihood of it happening. … People are often unwilling to give credence to improbable notions specifically because their professional or social community consider them too improbable. … In addition, if a problem is thought too complex, there is the danger that organizations will simply ignore it. … More generally, there is often a lack of imagination when considering high impact low probability risks. [emphasis added]
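The four quadrants translate directly into a prioritization rule. A minimal sketch follows; the 0.5 cut-off and the scores are arbitrary illustrative choices, not from the source:

```python
def classify_risk(probability: float, impact: float,
                  threshold: float = 0.5) -> str:
    """Map a (probability, impact) pair, each in [0, 1], to a
    quadrant of the probability-vs-impact chart. The 0.5
    threshold is an arbitrary illustrative choice."""
    hi_p = probability >= threshold
    hi_i = impact >= threshold
    if hi_p and hi_i:
        return "critical: top priority"
    if hi_i:
        return "high impact / low probability: mitigate impact, keep contingency plans"
    if hi_p:
        return "low impact / high probability: reduce likelihood, cope"
    return "low level: often ignorable"

print(classify_risk(0.1, 0.9))  # e.g. a global passive adversary
print(classify_risk(0.9, 0.9))  # e.g. reusing a username across personas
```

The hard part, as the quote above notes, is estimating the probability axis at all; the impact axis is usually the easier of the two.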

The US National Security Agency (NSA) arguably poses an existentially high-impact/low-probability risk for virtually everyone. That may seem too improbable, but it’s certainly existential, and so worth discussion. But do keep in mind Allen Dulles’ rule 72:

If anything, overestimate the opposition. Certainly never underestimate it. But do not let that lead to nervousness or lack of confidence. Don’t get rattled, and know that with hard work, calmness, and by never irrevocably compromising yourself, you can always, always best them.

Is the NSA Your Adversary? Consider the Risks of Data Sharing and Parallel Construction

The NSA is responsible for military signals intelligence (SIGINT). Initially, it was known (at least jokingly) as ‘No Such Agency’, the stuff of conspiracy theories. For obvious reasons, its capabilities and activities are largely classified. We know about them primarily from James Bamford’s books, from such whistleblowers as Bill Binney, Kirk Wiebe and Thomas Drake, and from materials leaked by Edward Snowden and The Shadow Brokers. So our understanding is limited. But even so, the NSA’s capabilities are mind-boggling. More links about NSA are here.

The NSA is a global active adversary. That is, it can (in principle, anyway) intercept, modify and trace all Internet traffic. It has a global grid of computers that intercept data from the Internet, store it, process it, and make it available to analysts. Using intercepts from network edges, it can employ traffic analysis to de-anonymize any persistent low-latency connection, no matter how much it’s been rerouted. And it can arguably compromise any networked device, and exploit it to get additional information. Also, it actively targets system administrators, in order to gain access to the networks that they administer.

However, while the NSA arguably intercepts everyone’s online activity, it can’t collect it all in a single location, because that would require implausibly fat pipes and humongous storage. And it can’t de-anonymize all low-latency connections, because that would require implausible processing power. But analysts can operate in parallel on all grid components, and receive results for local analysis. Data of interest gets moved to centralized long-term storage. But even the NSA can’t store all intercept data indefinitely. So its systems prioritize, and then triage. Data that seems more important is retained longer. But all metadata (time, IP addresses, headers, and so on) are retained indefinitely. And so are data that seem most important. That reportedly includes all encrypted data (but not all HTTPS, I suspect) that could not be decrypted (plus associated unencrypted metadata).

The good news is that the NSA’s charge is national security, and that you are most likely far too insignificant to warrant its attention. However, it’s important to note that the NSA does retain and search data on American residents. Also see this excellent article, and the declassified Memorandum Opinion and Order from the FISA Court. This is supposedly accidental, or incidental, or unavoidable, or something like that. And the FISA Court says to stop. Not that it matters much to the rest of us.

But anyway, who else has access to all this data? Well, we know that the NSA shares with intelligence agencies of US allies. And also gets data collected by them. There are at least three groups of such allies:

Five Eyes (Australia, Canada, New Zealand, the United Kingdom and the United States)
Nine Eyes (Five Eyes plus Denmark, France, the Netherlands, and Norway)
Fourteen Eyes (Nine Eyes plus Germany, Belgium, Italy, Spain, and Sweden)

The rules of SOD

We also know that the NSA shares data with numerous US law-enforcement agencies [2013], including the DEA, DHS, FBI and IRS:

A secretive U.S. Drug Enforcement Administration unit is funneling information from intelligence intercepts, wiretaps, informants and a massive database of telephone records to authorities across the nation to help them launch criminal investigations of Americans.

Although these cases rarely involve national security issues, documents reviewed by Reuters show that law enforcement agents have been directed to conceal how such investigations truly begin – not only from defense lawyers but also sometimes from prosecutors and judges.

The undated documents show that federal agents are trained to “recreate” the investigative trail to effectively cover up where the information originated, a practice that some experts say violates a defendant’s Constitutional right to a fair trial. If defendants don’t know how an investigation began, they cannot know to ask to review potential sources of exculpatory evidence – information that could reveal entrapment, mistakes or biased witnesses.

The unit of the DEA that distributes the information is called the Special Operations Division, or SOD. Two dozen partner agencies comprise the unit, including the FBI, CIA, NSA, Internal Revenue Service and the Department of Homeland Security. It was created in 1994 to combat Latin American drug cartels and has grown from several dozen employees to several hundred.

Today, much of the SOD’s work is classified, and officials asked that its precise location in Virginia not be revealed. The documents reviewed by Reuters are marked “Law Enforcement Sensitive”, a government categorization that is meant to keep them confidential.

“Remember that the utilization of SOD cannot be revealed or discussed in any investigative function”, a document presented to agents reads. The document specifically directs agents to omit the SOD’s involvement from investigative reports, affidavits, discussions with prosecutors and courtroom testimony. Agents are instructed to then use “normal investigative techniques to recreate the information provided by SOD”.

[emphasis added]

This is termed parallel construction. Reportedly, it’s long been a standard approach for protecting sources and investigative methods, such as confidential informants. But the scale here is vastly larger. And the practice is arguably unconstitutional (not to mention that it entails criminal conspiracy to suborn perjury).

But these are just nonspecific allegations, based on leaked documents and whistleblowers. Is there actually any unambiguous evidence that criminal prosecutions have secretly relied on NSA intercepts? I find nothing online. However, there is an excellent panel discussion from August 2015 at the DEA Museum website, involving former SOD directors and staff, about SOD history. John Wallace was very candid about the motivation to circumvent post-Watergate policies, which had been implemented to prevent warrantless electronic surveillance and eavesdropping on American citizens:

00:18:20 Well, we – we got to step back, and I got to give you some historical context. Remember, when we’re talking now, the early ‘90s. This is at least 10 years before 9/11, uh, and, so, we had two problems. …

00:18:50 The other dynamic that Bobby mentioned was we had, uh, the – the cases in New York, uh, principally en – engaged against the – the Cali Cartel that were simply dying on the vine in New York. Um, on the other hand, we had elements of the intelligence community who said they had all of this great information, but nothing ever came of it. Um, and, again, 10 years before 9/11, the wall is up, it is absolutely prohibited for, uh, anybody on the Intelligence side of the house, uh, to talk to somebody with a criminal investigative, uh, responsibility.

Enemy of the State (1998)

00:20:00 I was fortunate to be in a group of about four or five people, including the Attorney General, Bob Mueller was the Chief of the Criminal Division. Um, uh, uh, a true heroine in all of this was Mary Lee Warren who, at that time, uh, had the narcotics section. Uh, and, so, after meeting with Bobby’s small group, we got together with the senior leadership of CIA, the senior leadership of NSA, and the senior leadership of the Department, uh, of Justice, and began to work these two problems. [emphasis added]

00:20:35 The first problem being: How do we engage with the Intelligence community without compromising their sensitive sources and methods, their equities, without violating this – this wall arrangement; at the same time, breed [sic] life into Bill Mockler’s investigations in New York, and get the U.S. Attorneys all on the same sheet of music with regard to prosecuting these national level investigations that – that Bobby was trying to put together.

00:22:13 We don’t want to have to turn this stuff over in the course of discovery. On the same, uh, token though, the – we’ve got to make sure that the defendants’ rights to full and free discovery are completely observed. [huh?] We don’t want, uh, for example, CIA officers on the witness stand. Um, and – and those were some of the issues that we had to come up with creative solutions. Uh, and – and on occasion, uh, it, uh, it meant we’re – the solution we come at is going to be less than perfect, you know, because we want to, uh, to stay away from some of these electrified third rails on the legal side of the house.

And from Michael Horn:

00:47:31 Well, first, we – when we discussed this coordination between the Intelligence and the Operations Divisions, um, Joe referred to this – it – it was really the mantra at SOD, SOD takes no credit. We – we wanted to make sure the SACs were comfortable with – with our role in – in their investigations, and sometimes they were not. Uh, but by – by stepping back when – when these cases went down and – and assuring that any credit, any publicity, any photo ops, uh, were taken by the field, and SOD just stayed in the background, that went a long way to assuaging some of the – the SAC’s concerns. [emphasis added]

SOD has apparently been part of numerous drug cases, including major operations against cartels, but only two are named. Joseph Keefe mentioned Mountain Express:

00:51:06 A – a tremendous amount of cases. Every section that I had was fortunate they were all very productive. One that comes to mind ‘cause it involved DEA as a whole was Mount – a thing called Mountain Express. Mountain Express was back – well, Jack Riley was the ASAC.

And Michael Horn mentioned two Zorro cases:

00:53:56 Well, I guess the two Zorro cases were – which were two of the first national level cases, uh, come to mind. And, um, it – it was – again, as Joe mentioned, an incredible coordination a – among a lot of field offices. And, of course, the goal was to protect the wires that were going on. At this time, I think there were some wires going on in Los Angeles, and they were following loads to – across the country to New York. …

Even though SOD has allegedly played a major role in a tremendous number of cases since the early 90s, I find nothing online about the use of intelligence data, before the Reuters exposé in late 2013. Although some of the old drug cases are featured on the DOJ website, the use of parallel construction to hide use of intelligence data isn’t mentioned. For obvious reasons. Less than a year before the Reuters exposé, there was no mention of SOD in Senate debate on extending the FISA Amendments Act of 2008 for five years. Without doubt, at least Senator Feinstein was aware of SOD. But again, the reasons for silence are obvious.

Even since the Reuters exposé, I find nothing online about specific cases where investigators allegedly relied secretly on NSA intercepts, and engaged in parallel construction. No defense challenges. No court opinions. Not even anonymous allegations. There was a federal ruling in 2016, suppressing Stingray evidence that was obtained without a search warrant:

U.S. District Judge William Pauley in Manhattan on Tuesday ruled that defendant Raymond Lambis’ rights were violated when the U.S. Drug Enforcement Administration used such a device without a warrant to find his Washington Heights apartment.

The DEA had used a stingray to identify Lambis’ apartment as the most likely location of a cell phone identified during a drug-trafficking probe. Pauley said doing so constituted an unreasonable search.

“Absent a search warrant, the government may not turn a citizen’s cell phone into a tracking device,” Pauley wrote.

And yet there’s nothing online about the use of intelligence data in criminal cases. That’s surprising, given likely concerns about constitutionality, and participation in criminal conspiracy to suborn perjury. You’d think that at least one investigator would have turned whistleblower. But then, the NSA has been very careful about protecting sources and methods. I mean, consider 9/11. The NSA and CIA had allegedly monitored some of the plotters, but didn’t manage to convince National Security Advisor Condoleezza Rice to act. Whistleblowers claim that key results were “not disseminated outside of NSA”. Basically, I gather that the NSA had compromised parts of al Qaeda’s telephone network, and considered the intercepts too valuable to risk.

According to New America’s Open Technology Institute: “The NSA uses [Section 702] authority to surveil communications that go well beyond the national security purpose of the law.” In recent years, it appears that the FBI has further relaxed its rules for accessing NSA data. And finally, one of President Obama’s last acts was basically to normalize and expand SOD, allowing cooperating federal agencies to directly search NSA data. Perhaps he wanted to facilitate investigation of collusion between Russia and the Trump campaign.

Bottom line, it’s prudent to assume:

The NSA intercepts all Internet data.
All SOD partners (such as CIA, DEA, DHS, FBI and IRS) can access that data directly.
The NSA shares data with US allies.
Many (if not all) investigators in those countries can access NSA data.

With that in mind, how might NSA data have been used in my pwnage examples? There’s been speculation that two aspects of the Silk Road investigation are implausible: 1) using Google to find altoid’s posts on the Bitcoin Forum; and 2) discovery by DHS of fake IDs sent to Ross Ulbricht. The first claim is weak, given that one can easily replicate the search. But the second seems reasonable, given that relatively few Silk Road packages were intercepted. And given that DHS and FBI are SOD partners, FBI investigators searching for Silk Road would have seen Ross Ulbricht among the hits. It’s also possible that the NSA tipped off the FBI about the Silk Road server, and how to find its IP address.

OK, what else? Well, consider Operation Onymous. Perhaps the FBI might have known, from public sources, that DOD had funded research at CMU on Tor vulnerabilities. But how would the FBI have known that CMU researchers had identified numerous illegal Tor onion services, such as Silk Road 2.0? Perhaps they saw the announced Black Hat talk, subpoenaed the results, and imposed a protective order. But in that case, why did the FBI enigmatically refer questions about Silk Road 2.0 to CMU? Evasiveness creates suspicion. Especially because this was a drug case, and the role of SOD is always hidden through parallel construction.

OPSEC Countermeasures

Once risks have been identified and ranked, one must identify countermeasures. One must then assess their effectiveness and cost, relative to potential impacts. And one must assess the “possibility that the countermeasure could create an OPSEC indicator” (DoD OPSEC manual at p. 14). Where warranted by risk and worth the cost, one applies countermeasures. And finally, one assesses the effectiveness of countermeasures in practice. I focus here on four groups of countermeasures: (1) common sense and security mindedness; (2) awareness of egocentrism, pride, vanity and greed; (3) compartmentalization with multiple personas; and (4) technical implementation.

Common Sense and Security Mindedness

[Image: “Loose Lips Might Sink Ships” poster]

Allen Dulles’ 73 Rules of Spycraft begins with common sense:

The greatest weapon a man or woman can bring to this type of work in which we are engaged is his or her hard common sense. The following notes aim at being a little common sense and applied form. Simple common sense crystallized by a certain amount of experience into a number of rules and suggestions.

He goes on to emphasize the importance of security mindedness:

Security consists not only in avoiding big risks. It consists in carrying out daily tasks with painstaking remembrance of the tiny things that security demands. The little things are in many ways more important than the big ones. It is they which oftenest give the game away. It is consistent care in them, which form the habit and characteristic of security mindedness. In any case, the man or woman who does not indulge in the daily security routine, boring and useless though it may sometimes appear, will be found lacking in the proper instinctive reaction when dealing with the bigger stuff.

He also warns against carelessness:

The greatest vice in the game is that of carelessness. Mistakes made generally cannot be rectified. Never leave things lying about unattended or lay them down where you are liable to forget them. Learn to write lightly; the blank page underneath has often been read. Be wary of your piece of blotting paper. If you have to destroy a document, do so thoroughly. Carry as little written matter as possible, and for the shortest possible time. Never carry names or addresses en clair. If you cannot carry them for the time being in your head, put them in a species of personal code, which only you understand.

Small papers and envelopes or cards and photographs, ought to be clipped on to the latter, otherwise they are liable to get lost. But when you have conducted an interview or made arrangements for a meeting, write it all down and put it safely away for reference. Your memory can play tricks.

The greatest material curse to the profession, despite all its advantages, is undoubtedly the telephone. It is a constant source of temptation to slackness. And even if you do not use it carelessly yourself, the other fellow, very often will, so in any case, warn him. Always act on the principle that every conversation is listened to, that a call may always give the enemy a line. Naturally, always unplug during confidential conversations. Even better is it to have no phone in your room, or else have it in a box or cupboard.

Much of this may seem pointlessly old-school. But for those who work with computers and the Internet, there are now far more opportunities to be careless and leave traces for adversaries to find. Traces on our computers. Traces of online connectivity. Traces from browsing, email and messaging. Strong encryption is widely available now, at least. But there’s still the risk from metadata (URLs, email addresses, IP addresses, etc). Smartphones are ubiquitous, and are vulnerable to surveillance and tracking. And people still write on paper, sometimes.
There are just so many ways to fail.

Anyway, security mindedness is indeed essential. And for that, it’s crucial to pay attention, to be present to your life:

We train ourselves to see reality exactly as it is, and we call this special mode of perception ‘mindfulness.’ This process of mindfulness is really quite different from what we usually do. We usually do not look into what is really there in front of us. We see life through a screen of thoughts and concepts, and we mistake those mental objects for the reality.

Seeing “reality exactly as it is”, rather than our thoughts and feelings about it, is the basis for security mindedness. Also crucial is seeing ourselves objectively. And thinking through the consequences of every action. Globally, and from an adversary’s perspective:

In addition to being a process, OPSEC is also a mindset.

It means being able to consider your organization or environment from the point of view of your adversary.

This allows you to consider your vulnerabilities from the perspective of the threat based on their capabilities and actions.

It’s rather like activating God mode in first-person shooter (FPS) video games. That’s the default mode in chess and Go, of course.

Anyway, it was traces—carelessly left and/or carelessly forgotten—that pwned the principals in most of my examples:

- Ross Ulbricht used his Gmail address on Bitcoin Forum, looking for a coder. He kept everything (including email and chat logs, a diary, and true-name data for all staff) on one encrypted laptop. And he routinely carried and used that laptop in public, providing opportunities for the FBI to seize it.
- Roger Thomas Clark provided an image of his passport to Ross Ulbricht. So he (and other Silk Road staff) were pwned when Ross was.
- Artem Vaulin registered kickasstorrents.biz using his real name.
- Shannon McCoole used the same unusual greeting, and similar usernames, in multiple online accounts. And in one of them, he researched 4WD lift kits, and then bragged about them on his personal Facebook page.
- Hector Monsegur had linked personas going back well over a decade. Early personas were linked to his meatspace identity. And someone had retained IRC logs, including all of that information.
- Tomáš Jiříkovský created sheepmarketplace.com before the Sheep Marketplace onion site, and complained there “about the problems of running a Bitcoin-using hidden service”. And after being doxxed as the owner, he cashed out implausibly huge amounts of Bitcoin that he had stolen.

As Allen Dulles notes, it’s the little things. Rigorous anonymity may not seem important when you’re a clueless n00b, when you’re just playing around. Say, when you prototype this cool anonymous online market, like Silk Road or Sheep Marketplace. And then, after it takes off and becomes internationally infamous, you’re just too stressed out to remember such little things. Or say, when you’re starting out with your Pirate Bay clone. Or when you’re 12 years old and learning to hack, and start hanging out on IRC.

Awareness of Egocentrism, Pride, Vanity, Greed and Lust

[Image: The Seven Deadly Sins (Hieronymus Bosch)]

Allen Dulles observes:

The next greatest vice [after carelessness] is that of vanity. Its offshoots are multiple and malignant. Besides, the man with a swelled head never learns. And there is always a great deal to be learned.

However, according to Jane Austen, in Pride and Prejudice:

Vanity and pride are different things, though the words are often used synonymously. A person may be proud without being vain. Pride relates more to our opinion of ourselves, vanity to what we would have others think of us.

So actually, I think that Dulles is talking more about pride (swelled head) than about vanity. But typically they go together, and both are dangerous. Pride leads to overconfidence, and vanity to bragging. Nick Romeo recently blogged some relevant tl;dr from Plato:

… In the Apology, Socrates claims to be wiser than other men only because he knows that which he does not know. When Kahneman writes that we are ‘blind to our blindness’, he is reviving the Socratic idea that wisdom consists in seeing one’s blindness: knowing what you do not know.

Intellectual humility and overconfidence can stem from purely cognitive processes, but they are also correctly understood as moral achievements or failings. Someone who always thinks that he is right about everything, however little he knows, is making a moral as well as a mental mistake. Similarly, the cultivation of intellectual humility is, in part, the cultivation of an ethical virtue.

… This is only a preliminary step in Plato’s dialogues – a (good-natured) reaching after fact and reason should and does occur – but an initial tolerance of uncertainty is a capacity without which individuals and societies cannot adequately self-correct and improve. People who are pained and irritated by not knowing something reach prematurely for whatever apparent reasons are most accessible.

Ironically enough, Jonah Lehrer has written quite eloquently about how smart people make stupid mistakes. The fundamental problem seems to be egocentrism. That is, it’s relatively easy to rationally and objectively evaluate other people’s behavior. But it’s hard to be rational and objective about ourselves. It’s hard to face the facts, and consider what to do about them. We’re often just too attached. Introspection typically opens up a morass of feelings, excuses, rationalization, wishful thinking, blame, and denial. There are also the illusions of being immortal, and smarter than others. Basically, we’re biased. What’s needed are mindfulness and humility.

Consider Hector Monsegur’s comment in an interview after his brief imprisonment: “I’ve been hacking since ‘95. … There’s only so much you can do before you get caught.”

OK, so I can imagine how many criminals would say something like that, especially after being caught. But it’s rationalization. His sins were carelessness and bragging. Plus pushing children into crime, and then snitching on them, according to Ryan Ackroyd (LulzSec’s Kayla). What happens, I think, is that we know (at some level) that we’ve screwed up. But the mechanisms driving our behavior are largely unconscious. Our conscious ego is happy to take credit for success, but it tends to suppress evidence of error. There’s a strong need to be right. And when evidence of error becomes undeniable, the ego may flip to fatalism. And making excuses.

Another trap is greed. Consider Tomáš Jiříkovský. I mean, what else could explain how he cashed out a fortune in stolen Bitcoin, from a darknet drug marketplace, less than a month after being interviewed about alleged connections to said darknet drug marketplace in his country’s major newspaper? But hey, $100 million is undeniably tempting. It’s likely that greed also dissuaded Ross Ulbricht from giving up Silk Road.

Dulles also warns about sex and alcohol:

Booze is naturally dangerous. So also is an undisciplined attraction for the other sex. The first loosens the tongue. The second does likewise. It also distorts vision and promotes indolence. They both provide grand weapons to an enemy. It has been proved time and again, in particular, that sex and business do not mix.

OK, so Ross Ulbricht did tell his off-and-on girlfriend Julia Tourianski about Silk Road, and she apparently told one of her friends, who then posted about it on his Facebook wall:

I’m sure the authorities would be interested in your drug-running site.

But hey, she later became a staunch defender. Albeit after being forced to testify at his trial.

Compartmentalization with Multiple Personas

[Image: firewalls between electrical gear]

It’s clear from my examples that pseudonymity alone is a fragile defense. Once pwned in any context, everything is pwned, because it’s all tied together. As I’ve noted, it’s far more robust to fragment and compartmentalize one’s online activity across multiple unlinked personas. Ross Ulbricht and Hector Xavier Monsegur both lacked adequate compartmentalization, over time. That is, even if their current OPSEC was good, which it actually wasn’t, there were links to past activity with pitiful OPSEC. Shannon McCoole basically didn’t compartmentalize. He was skee who says hiyas on The Love Zone, and basically the same everywhere else online.

Compartmentalization (aka compartmentation) entails the isolation of stuff in compartments. That may involve walls, physical or figurative, or just the absence of connections. The goal is preventing bad things from spreading. Limiting access and damage. For example, military aircraft (containing fuel and munitions) are prudently isolated in combat environments by blast walls aka revetments. Explosives are often stored in isolated bunkers, separated by blast walls. Firewalls are used between townhouse units, between electrical components at substations, between engine and passenger compartments of vehicles, and so on. Compartmentalization plays diverse roles in biological organisms.

And yes, compartmentalization is a crucial component of Information Security (INFOSEC) and Operations Security (OPSEC):

Operations Security sounds like something that would only concern spies and special operations soldiers. The reality is that since your government is likely spying on you, even if you have nothing to hide , OpSec concerns you. It’s a concept you need to become familiar with and begin to apply in your daily life. Maintaining Operational Security is simply the practice of taking small steps to secure the information you don’t want disclosed.

Failing to compartmentalize: It’s important enough to repeat. If someone doesn’t have a need to know, don’t tell them. This isn’t a sign of distrust, it’s a sign you are trustworthy. Remember that when you disclose unnecessary information about yourself, you are probably disclosing it about others.

From Allen Dulles:

If you have several groups, keep them separate unless the moment comes for concerted action. Keep your lines separate; and within the bounds of reason and security, try to multiply them. Each separation and each multiplication minimizes the danger of total loss. Multiplication of lines also gives the possibility of resting each line, which is often a very desirable thing.

Away from the job, among your other contacts, never know too much. Often you will have to bite down on your vanity, which would like to show what you know. This is especially hard when you hear a wrong assertion being made or a misstatement of events. Not knowing too much does not mean not knowing anything. Unless there is a special reason for it, it is not good either to appear a nitwit or a person lacking in discretion. This does not invite the placing of confidence in you. Show your intelligence, but be quiet on anything along the line you are working. Make others do the speaking. A good thing sometimes is to be personally interested as a good patriot and anxious to pass along anything useful to official channels in the hope that it may eventually get to the right quarter.

And from the grugq:

The cornerstone of any solid counterintelligence program is compartmentation. Compartmentation is the separation of information, including people and activities, into discreet cells. These cells must have no interaction, access, or knowledge of each other. Enforcing ignorance between different cells prevents any one compartment from containing too much sensitive information. If any single cell is compromised, such as by an informant, the limits [sic] of the damage will be at the boundaries of the cell.

Now, compartmenting an entire organization is a difficult feat, and can seriously impede the ability of the organization to learn and adapt to changing circumstance. However, these are not concerns that we need to address for an individual who is compartmenting their personal life from their illicit activity.

Spooks, such as CIA case officiers [sic], or KGB illegals, compartment their illicit activity (spying) from their regular lives. The first part of this is, of course, keeping their mouths shut about their illicit activities! There are many other important parts of tradecraft which are beyond the scope of this post. But remember, when you are compartmenting your life, the first rule is to never discuss your illicit activities with anyone outside of that compartment.

[Image: Be->Do->Have cycle]

OK, so how does one go about compartmentalizing with multiple personas? First, consider the standard advice for personal development. That is, after considering your principles and values, you formulate some goals. Then you consider how you would achieve those goals, what actions you would need to take. And finally, you consider who you would need to become to effectively take those actions. When it comes to implementation, however, the first step is being. Because actions grow out of being. It’s the classic Be->Do->Have cycle.

But of course, life isn’t that simple. We all live in multiple realms. Family. Social life. Spirituality. Work. Play. And these realms call forth distinct ways of being. In order to play safe online, you must distinguish subrealms, with particular interests and goals. Then you create one or more personas for each distinct subrealm. With adequate compartmentalization, adversaries don’t see you as a person, but only as unrelated personas.

Requisite skills come from the fields of fiction writing, acting, role-playing games, and cosplay. Character design is a core component of writing a novel. Few personas need elaborate storylines, but language is essential, and location is often necessary. It also helps to think through each persona’s history and interests. There’s a tension between drawing on what you know, and revealing too much about yourself. It’s also common to base characters on composites of real people. Indeed, it’s arguable that real people are composites of the real people who raised and influenced them. But do avoid pwning yourself or your friends. Creative lying also helps. You may also enjoy some spiritual inspiration, such as traditional budō or something more fanciful.

OK, so names used for personas are key indicators. With good compartmentalization, each persona will be associated only with its own stuff, and won’t implicate other personas. But still, when developing a new persona, one of my first steps is to google the name and username. For example, I picked mirimir based on the idiomatic Russian toast мир и мир (world peace). But there was already the artist Miriam Laina, Mirimir Alvarez and میریم سفر [Go travel]. So hey.

Other key indicators are language usage and style. For example, Mirimir uses English, with traces of British and southern US vernacular. I’ve drawn some of that from experience, and some from people I’ve known and worked with. But I’ve also drawn from literature and films. For example, when using this persona, I get present to memories and associations that are based on William S. Burroughs’ escape child Kim Carsons.

I base other personas in the same way, on experience, people and fictional characters. There’s typically some fictional character, and a setting where it operates, which make me present to the persona, and help me get in character. Some personas also use English, but with perfect grammar and extremely generic style. Other personas use various other languages, more or less properly, depending on my expertise in them. Sometimes I use offline translation apps, with local dictionaries. Online translation is rather too obvious.

Then there are the obvious indicators: address, email, and landline and cell numbers. Email is easy. Just sign up via some mix of VPNs and Tor (depending on usage) and you’re good to go. It’s best to use services that only require email. But even for services that require addresses and telephone numbers, they only check for validity before account activation, if at all. I typically use hostels. Some services may require telephone confirmation, but you can just let those go. If it’s something you need, you can use online services that interface with cellular SIM cards for texting. Or burner phones, but those are geolocation risks. At worst, using fake information, you’ll lose the account if they check. So plan accordingly.

The main goal is to avoid any association with your meatspace identity. Not by name. Not by contact information. Not by language usage and style. Not by interests. Not even by literature that you base personas on. You don’t draw on stuff that you’ve recently purchased in meatspace, or stuff that you discuss using your meatspace persona, especially online. And obviously, you must use some mix of VPNs and Tor (depending on usage) to avoid any association with your meatspace identity by IP address.

For strong compartmentalization, it’s also important to avoid associations among personas. So you use different addresses etc, and different network paths, using nested VPN chains with different final VPNs, and/or different Whonix instances. However, in some cases it’s OK to have some associations between a persona and one or more sub-personas, which are posing as that persona’s personas. Sometimes, I do that to be playful, and sometimes for purely practical reasons.

Takeaways from an interview with Lindsay Moran, an ex-CIA operative, offer useful insight:

- When trying to compartmentalize, make sure your motivators of money, ideology, coercion, and ego are fulfilled internally. Do not rely on an external resource for this.
- Confidentiality and anonymity (or un-attributability) win over mere confidentiality in the face of electronic surveillance.
- Identify the natural tendencies to shut down, or tunnel yourself into a single identity, and compensate by building personal, trusted relationships in your other identities.

But even so, as the grugq notes, compartmentalization is stressful:

If the operative isn’t living a completely isolated clandestine lifestyle in their Unabomber cabin, they will have to isolate parts of their individual selves to compartment the different aspects of their lives. There will be their normal public life, the one face they show to the world, and also a sharded ego with their clandestine life. Maintaining strict compartmentation of the mind is stressful, the sharded individual will be a sum less than the total of the parts.

As if that wasn’t enough, there is the constant fear of discovery, that the clandestine cover will be stripped away by the adversary. This leaves the operative constantly fretting about the small details of each clandestine operational activity. Coupled with the compartmentalization of the self, the operative also has to stress about each non-operational activity, will this seemingly innocent action be the trigger that brings it all crashing down?

[Image: Dover Castle]

That’s true. But using multiple layers of personas helps protect against catastrophic failure, as noted in a guide for making anonymous online purchases:

Depending on the kind of operation, the fake identity that will be used has to be as authentic as possible. A layered approach is used, meaning that one would create a fake online identity and completely compartmentise this identity from its real identity. This fake identity would then be used to create other fake identities. It ensures that if one fake identity gets compromised, it would not lead to de-anonymization of the person’s real identity, but instead just one ‘layer’ or ‘compartment’ of the identity protection would have been ‘peeled off’. In practice this means that created email addresses point consequently only to the email address of its previous ‘layer’ and not layers beneath its previous ‘layer’. As in other OPSEC practices, avoiding contamination and profiling between the ‘wrapped’ identities is vital.

Allen Dulles suggests an analogous approach:

When you have made a contact, till you are absolutely sure of your man — and perhaps even then — be a small but eager intermediary. Have a They in the background for whom you act and to whom you are responsible. If They are harsh, if They decide to break it off, it is never any fault of yours, and indeed you can pretend to have a personal grievance about it. They are always great gluttons for results and very stingy with cash until They get them. When the results come along, They always send messages of congratulation and encouragement.

Using multiple online personas is useful for more than privacy and anonymity. It can be an expression of playfulness. And it can help you be more creative:

Pretending to be someone else: When you’re stuck in a creative process, unfocus may also come to the rescue when you embody and live out an entirely different personality. In 2016, educational psychologists, Denis Dumas and Kevin Dunbar found that people who try to solve creative problems are more successful if they behave like an eccentric poet than a rigid librarian. Given a test in which they have to come up with as many uses as possible for any object (e.g. a brick) those who behave like eccentric poets have superior creative performance. This finding holds even if the same person takes on a different identity.

Technical Implementation

My focus here has been on strategy and tactics. I won’t be getting into details of technical implementation. Lately, however, I’ve written primarily about that. Available options for general Internet access are VPNs, JonDonym, and Tor. One can also use I2P, with network outproxies, but the latency is even higher than with Tor. Each has its strengths and its weaknesses. And there’s great uncertainty. Anyway, for more on those issues, see Will a VPN Service Protect Me? Defining your Threat Model and Adversaries, and Anonymity Systems: The Basics.

The best bet is using personas, with data compartmentalized in some mix of hardware and virtual machines (VMs), and network connectivity correspondingly compartmentalized with nested proxy chains. See Advanced Privacy and Anonymity Using VMs, VPN’s & Tor and How to perform a VPN leak test.

An issue that deserves more attention is the compartmentalization of encrypted information. Consider how Ross Ulbricht kept everything about Silk Road on his LUKS encrypted laptop. If the FBI had swatted him at home, he would arguably have had time to shut it down. Unless agents were prepared to extract the key from RAM. But they were smarter than that. They busted him in public, and managed to acquire his laptop with LUKS unlocked. So they had everything: his diary, email, chat logs, accounting spreadsheets, personnel files, and so on. Oops.

It would have been safer to compartmentalize data in multiple encrypted containers. Enigmail (using GnuPG public-key encryption) typically works that way. All encrypted messages, including draft unsent messages, are encrypted in storage, and decrypted as needed. One can also use GnuPG for encrypting individual files, or archived folders. But that can get tedious. For general storage, one can create file-based encrypted containers with VeraCrypt or Tomb. Tomb uses cryptsetup to create LUKS volumes on loop devices, which are just files. With any file-based approach, it’s prudent to deactivate all swap devices (swapoff -a) to avoid leaving traces on disk.

Alternatively, one can have multiple LUKS partitions, with only the main one decrypted and mounted at boot. It’s easy to decrypt and mount LUKS partitions with the disk utility. Backup and recovery of LUKS partitions is more error-prone than simple file management, however. For those who compartmentalize in VMs, another option is using multiple LUKS-encrypted virtual disks. In VMs, they behave just like LUKS partitions. But in the host, they’re just encrypted files.

[Image: xkcd, “$5 Wrench”]

OK, so let’s say that an adversary has both you and your encrypted stuff. The encryption is unbreakable. And the adversary believes that you know the password(s). But you refuse to decrypt. Under some circumstances, you’ll be tortured. Elsewhere, you may be jailed, perhaps indefinitely. Even if you have truly forgotten your passwords. At borders, non-residents may be denied entry. If there’s other reason for suspicion, authorities may escalate.

If such risks concern you, you can mitigate them by physically compartmentalizing yourself from your encrypted stuff. That is, you store your encrypted stuff anonymously online. To reduce the risk and impact of loss, you can have multiple compartments, and store multiple copies of each, in different places. So you possess the minimum required for whatever you’re currently working on. However, few could remember that much information about locations, passwords, etc. But if you encrypt and store it locally, you’re faced with the same issue about refusing to decrypt stuff.

There’s an obvious solution. Encrypt the information, and anonymously store multiple copies online. But you still need to remember a few online locations, and some usernames and passwords. Some can remember that much, I’m sure. But for those who want some backup, there’s Shamir’s Secret Sharing Scheme:

In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k-1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces [sic, actually n-k] and security breaches expose all but one of the remaining pieces [k-1].

This is, by the way, from Adi Shamir, the co-inventor of RSA. There’s the Debian package ssss by Bertram Poettering. And, just to be clear, he notes that the scheme is provably (aka unconditionally) secure:

Note that Shamir’s scheme is provable secure, that means: in a (t,n) scheme one can prove that it makes no difference whether an attacker has t-1 valid shares at his disposal or none at all; as long as he has less than t shares, there is no better option than guessing to find out the secret.
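To make the (t,n) threshold idea concrete, here’s a minimal Python sketch of Shamir’s scheme over a prime field. It’s illustrative only; for real secrets, use an audited tool such as the Debian ssss package. The prime, the demo secret, and the parameters are arbitrary choices for the demo.

```python
# Minimal Shamir secret sharing over a prime field (illustrative sketch only).
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a small demo secret

def split(secret, n, k):
    """Split integer secret (< PRIME) into n shares; any k recover it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # evaluate by Horner's rule
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def combine(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, n=10, k=3)
assert combine(shares[:3]) == 123456789   # any 3 shares suffice
assert combine(shares[4:7]) == 123456789
```

With fewer than k shares, every candidate secret remains equally likely, which is exactly the unconditional security that Poettering describes.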

However, with ssss you’re limited to 128 ASCII characters (128 bytes, or 1024 bits). That’s enough for four 32-character blocks, each comprising:

- 11-15 characters for an IPv4 address or URL hint
- five characters for a username
- 12-16 characters for a password

Say that you use n=10 and k=3. So now you have ten strings to hide somewhere. Each string comprises a sequence number (01- to 10-) and 256 ASCII characters. For example:

01-3a33b47a4d887260…0b2950346ca889f6

02-08ec7fe42b44d5fb…a533b5add1d26016

10-a1570c913ed06cd3…48868f06b813b08c

Only three of the strings are needed to recover the original data, and two of those can be known by the adversary. To obscure the sequence numbers, you could replace 01- with a, and so on. So that gives you ten 257-character strings to hide. You might post them to discussion forums. Or tweet them. Or use DeepSound to hide them in audio tracks, using steganography. Or print them, embed in plastic, and geocache them (using a passive GPS receiver, to avoid pwnage). Whatever you like.
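The fixed-width packing sketched above (four 32-character blocks per 128-byte secret) might look like this in Python. The field widths (15, 5 and 12 characters) are one possible split that sums to 32, and the ‘~’ pad character is an arbitrary choice that must not end any real field value.

```python
# Pack a location hint, username and password into one fixed 32-character
# record; four such records fill ssss's 128-character secret. The widths
# (15 + 5 + 12 = 32) and the '~' padding are illustrative choices.
def pack(hint, user, password):
    record = hint.ljust(15, '~') + user.ljust(5, '~') + password.ljust(12, '~')
    assert len(record) == 32          # each field must fit its width
    return record

def unpack(record):
    hint, user, password = record[:15], record[15:20], record[20:32]
    return hint.rstrip('~'), user.rstrip('~'), password.rstrip('~')

# Round-trip check (values are examples)
rec = pack('198.51.100.7', 'alice', 'tr0ub4dor')
assert len(rec) == 32
assert unpack(rec) == ('198.51.100.7', 'alice', 'tr0ub4dor')
```

Concatenate four such records, feed the 128-character result to ssss-split, and each field unpacks cleanly after recovery.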