

# Thursday, May 06, 2010
Command line tool for DPAPI

Using the data protection API, I found I needed to be able to generate values by hand that my apps can work with later. So I wrote a small command line utility to help out with that process.


dpapicmd [/user] [/decrypt] [/utf8] [/entropy:<entropy>] {/clipboard|<text>}
Text is read as Base64 bytes unless /utf8 is used. Input to /decrypt and encrypted output are always Base64.

C:\>dpapicmd /utf8 /entropy:dogstuff "I'm barking mad."

C:\>dpapicmd /utf8 /dec /entropy:dogstuff AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAOE510Ds5PkGpG2g7PxkgXwQAAAACAAAAAAAQZgAAAAEAACAAAACSClIQpWDawT26jRrsFr/HauG2

I'm barking mad.

Download: dpapicmd.exe (7.5 KB) (Requires .NET 2.0 because I'm too lazy to write it in C.)

And horrible hacky source: dpapicmd.cs.txt (4.25 KB)


Code | Security
Thursday, May 06, 2010 5:53:33 PM UTC  #    Comments [0]  |  Trackback

# Sunday, March 22, 2009
Using symmetric encryption to pass messages

This entry was triggered by this question. Someone asked how to use AES, and we got two sample classes that do it wrong. The flaw in both was that they shared the IV, which means your ciphertext can leak information. One answerer didn't believe me at first, but then got it and deleted his code. The other person got offended and said IVs, performing authentication, etc. are all "corner cases" and any problem is "contrived". So, I'm going to provide a bit of code and show two problems that arise from not generating a unique IV for each message and not authenticating the decrypted data.

First, I wrote this a while ago: What is an IV?. That describes what an IV is, and why you need a unique one for each message. Wikipedia also has good information on this. Now for the demo. I used the code at the bottom, but removed the hashing and random IV from it. So it's just encrypting with the same key and IV for each message -- very straightforward. Here are the messages and their ciphertext:

"Alice; Bob; Eve;: PerformAct1"
"Alice; Bob; Eve;: PerformAct2"


Notice how the first block of ciphertext is the same? All messages starting with "Alice; Bob; Eve;" will have that same first block. That means an attacker, after getting this ciphertext once, now knows if any message is addressed the same way. Very, very straightforward, basic attack. Now, maybe for a specific implementation you have in mind, this is not an issue. But it's still improperly implemented cryptography.
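You can see the determinism for yourself without any crypto library. The Python sketch below uses a keyed SHA-256 hash as a stand-in for a real block cipher -- it is NOT AES and must never be used as a cipher -- purely to illustrate that CBC's first ciphertext block is a deterministic function of key, IV, and first plaintext block:

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy stand-in for a 16-byte block cipher: deterministic keyed function.
    # Illustration only -- not a real cipher.
    return hashlib.sha256(key + block).digest()[:16]

def cbc_first_block(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # CBC: first ciphertext block = E_k(first plaintext block XOR IV).
    first = plaintext[:16].ljust(16, b"\x00")
    xored = bytes(a ^ b for a, b in zip(first, iv))
    return toy_block_encrypt(key, xored)

key = b"k" * 16
fixed_iv = b"\x00" * 16  # the bug: same IV every time

c1 = cbc_first_block(key, fixed_iv, b"Alice; Bob; Eve;: PerformAct1")
c2 = cbc_first_block(key, fixed_iv, b"Alice; Bob; Eve;: PerformAct2")
print(c1 == c2)  # True: identical first blocks reveal the shared prefix
```

With a fresh random IV per message the two first blocks differ, and the shared "Alice; Bob; Eve;" prefix is no longer visible in the ciphertext.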

For the next attack, we're going to show that even with a random IV, you need to authenticate your decrypted messages. This code generates a 64-bit integer and encrypts it with AES and a random key/IV. Then, it starts changing bytes until the decrypt succeeds. Presto: the attacker was able to present a completely different value, and the decryption was successful.

using System;
using System.Security.Cryptography;

public static void Main() {
    var buff = new byte[8];
    new Random().NextBytes(buff);
    var v = BitConverter.ToUInt64(buff, 0);
    Console.WriteLine("Value: " + v);
    Console.WriteLine("Value (bytes): " + BitConverter.ToString(BitConverter.GetBytes(v)));
    var aes = Aes.Create();
    var encBytes = aes.CreateEncryptor().TransformFinalBlock(BitConverter.GetBytes(v), 0, 8);
    Console.WriteLine("Encrypted: " + BitConverter.ToString(encBytes));
    Console.WriteLine("Decrypted: " + BitConverter.ToUInt64(
        aes.CreateDecryptor().TransformFinalBlock(encBytes, 0, encBytes.Length), 0));
    // Tamper with the ciphertext, one byte at a time, until some
    // mutation happens to decrypt without a padding error.
    for (int i = 0; i < encBytes.Length; i++) {
        for (int x = 0; x < 250; x++) {
            encBytes[i]++;
            try {
                // The padding validated by chance: a forged value is accepted.
                Console.WriteLine("Attacked: " + BitConverter.ToUInt64(
                    aes.CreateDecryptor().TransformFinalBlock(encBytes, 0, encBytes.Length), 0));
                return;
            } catch { }
        }
    }
}

Here's an example run:
Value: 5686260040031435365
Value (bytes): 65-7A-92-1A-61-A7-E9-4E
Encrypted: F4-62-AC-02-2D-7D-43-6A-4D-97-68-4D-95-9F-8A-DF
Decrypted: 5686260040031435365
Attacked: 1603329786558177755

Since there's no authentication of the decrypted data, an attacker can just play with the ciphertext until it generates an acceptable value. Perhaps you have other mitigations in your implementation/application for this, but why rely on that?
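To sketch the fix in the simplest terms, here's a Python illustration using only the standard library (this is not the C# demo code -- just the verify-before-trust idea): attach an HMAC tag to the message, and check it in constant time before using anything. Names here are illustrative.

```python
import hashlib
import hmac
import os

MAC_KEY = os.urandom(32)  # in practice: a key separate from the cipher key

def protect(data: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so any tampering is detectable on receipt.
    return data + hmac.new(MAC_KEY, data, hashlib.sha256).digest()

def unprotect(blob: bytes) -> bytes:
    # Verify the tag in constant time BEFORE trusting the contents.
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return data

blob = protect(b"PerformAct1")
assert unprotect(blob) == b"PerformAct1"

tampered = bytes([blob[0] ^ 1]) + blob[1:]  # flip one bit, like the attack above
try:
    unprotect(tampered)
except ValueError:
    print("tampered message rejected")
```

With the tag checked first, the byte-twiddling attack above fails: a forged ciphertext can't produce a tag that verifies.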

Here's some demo code (I haven't tested it much, so it might have some major issues -- but not by design AFAIK). Note this just shows performing an encryption operation, including the IV in the message, and verifying the decrypted bytes. Other things like replay attacks are not considered. If you're trying to learn how to use crypto so you can drop it into an application, STOP, then go read enough to understand what you're doing and the implications for your particular application.

aesdemo.cs.txt (4.38 KB)

Code | Security
Sunday, March 22, 2009 11:46:56 PM UTC  #    Comments [0]  |  Trackback

# Saturday, November 29, 2008
Follow up on Obfuscation and VistaDB
Recently I mentioned how a lot of companies use obfuscation unnecessarily, and it ends up hurting legitimate customers while doing nothing to prevent "crackers". Specifically, I mentioned VistaDB, as the obfuscation tool was injecting invalid IL, causing Mono to reject the assembly.

Jason Short replied to me and I detailed the exact problems with the obfuscator (along with a few F# scripts to unobfuscate and remove the bad IL). They then released a new build with the obfuscation removed -- which Mono now happily loads.

I just wanted to give kudos to VistaDB for doing this. Not many companies are smart enough to realise that their "protection" tools are useless and do a 180 on such a stance.
Code | Security
Saturday, November 29, 2008 7:12:05 PM UTC  #    Comments [0]  |  Trackback

# Saturday, November 08, 2008
Software protection
I've been meaning to write about this for a while. It's a very simple topic, but developers get all emotional and stop being rational as soon as the magic "code protection" and "piracy" words get invoked. I'd like to say I'm not promoting copyright infringement nor saying developers don't deserve to be compensated for their work. Now that that's out of the way...

The two things most developers want to stop are unauthorized installing (license enforcement) and "code protection". Code protection is a very weak concept, mainly revolving around thinking people are gonna steal your precious algorithms. Protection is easy to deal with, so I'm going to cover that now.

Before VMs like .NET were popular, most of the code protection I've seen revolved around the code that implements the license enforcement. Developers would write all sorts of nasty, clever code to make things hard for the crackers. You see this sometimes when you run an application and it complains about a debugger being installed or running. With Java and .NET, disassembly got easier. This made it extra easy to patch any license code, since the disassembled code was in a high-level language like IL. The response, and our first enemy of the day, was obfuscation.

Obfuscation takes your assembly and screws up all the metadata. On top of that, it might go and rewrite sections of your code to obfuscate the flow of the program, or perhaps indirectly load strings. The downside of course is that debugging gets really hard because all your method names are now unreadable, reflection is broken, etc. Depending on the techniques an obfuscator uses, you can run into some other troubles. For instance, whatever obfuscator VistaDB uses is really broken, as it generates bad IL that just happens to work on MS CLR, but crashes (rightly so) on Mono. Not to mention that certain IL tricks are not verifiable, hence you can't use the code in lower-trust scenarios.

But what does obfuscation accomplish? Crackers ALWAYS win. Even the "most difficult" license systems with hardware dongles and activation get cracked. The response I usually hear is "well, it raises the bar". So. What. "Raising the bar" is totally pointless. Bruce Schneier has talked about this.

For physical security, raising the bar is good in general. For example, if you buy a safe, it'll prevent a lot of thieves from getting to the valuables. Sure, there are higher level thieves, but you've weeded out a lot of the population around you, and the benefit is very real. Now some punk kids can't just go in and vandalize and "casually steal" your valuables.

But for computerized tech, the "bar" is the highest-level attacker. If your valuable is "cracking my serial verification code", as soon as the "high level thief" cracks it, he can go write a simple program anyone can download. So the REAL bar is "user googles for a crack". That's what needs to sink in, past all the emotional nonsense developers go through when protecting their code. No matter what kind of complex protection schemes you put in, then obfuscate on top of that, if the product has value, _someone_ will crack it, and all your users can just download the crack.

This isn't a maybe, this isn't a "possibly", this isn't theoretical, this is the exact reality. There is *nothing* you as a developer can do to prevent this (apart from making your product suck so much no one cares). [If there is, I'd love to hear it.]

So, obfuscation has zero value in preventing cracks or serials from getting out. And it has downsides. Just read the VistaDB blogs/forums to see real-world problems that exist only because they use an obfuscator.

What about "protecting special algorithms"? From whom? If your competitors are good, they'll figure things out regardless. If they suck, they won't be able to do much with it anyway. I think the biggest threat is some overseas group disassembling your code, slapping their logos on it, and reselling it. That's a clear and obvious loss if they are making sales. But obfuscation isn't really going to stop it, just raise the bar a tiny bit. In this case, since you're dealing with a limited number of "pirate companies" that exist for profit, perhaps obfuscating has a bit of value. But think: if someone who doesn't know your source code and can't provide support can still outsell you and your marketing, perhaps you have business issues.

The one other place I hear people using obfuscation is to protect an app from "casual hacking". WTF does that mean? You mean you're afraid your sales clerk might decompile the PoS application, but give up quickly? You think it means you can safely store passwords in the binary? I'm not sure what such developers are thinking, but I'm guessing they did a poor security analysis of the situation.

As a side note, this is not particular to VM platforms like Java and .NET. Check out Hex Rays. They do a fine job *decompiling* optimized native code. I've seen it in action; it makes it easy to take any native app, decompile it, figure it out, then work with the assembly code. So these .NET devs thinking they are so leet cause Reflector messes up and hence no one can figure it out... sigh.

Finally, a nice real-life demo. Look at Spore and other games using heavy DRM and protection mechanisms. Obviously Electronic Arts has an unlimited budget for getting the "best" type of protection. Yet the protection proved utterly useless against piracy. Just go to ThePirateBay.org and search. Yet they certainly introduced more bugs and user hate. (Of course, the REAL motive behind such DRM is killing the used-games market. For that, all they need is something honest users won't break.)

P.S. The reason I finally wrote all this is because VistaDB just took the silliness to the next level. I got their 3.4 Trial, but it crashes on Mono because the obfuscator emits totally invalid IL code. Their official response was that Trials aren't tested on Mono. I bought the product, and the "stable" builds still have the same busted IL code. Awesome protection; stopping paying users from using the software rocks!

I suppose I could understand IF they had some awesome trade secrets. BUT, they provide a source code license. So an evil VistaDB competitor just buys a source code license to get all the details. How is obfuscation helping ANYONE here? (Note the runtime has no licensing; only the developer install.)

Code | Security
Saturday, November 08, 2008 12:06:00 AM UTC  #    Comments [4]  |  Trackback

# Friday, October 10, 2008
VoIP Security - Peering
Every now and then I read an article on VoIP security. These articles almost always go over the obvious stuff: lack of encryption, eavesdropping, making sure you firewall your networks, and so on. While those are certainly major issues, especially for a corporate deployment, there are still some other interesting issues.

One thing that keeps getting mentioned is the possibility for VoIP peering. Peering allows VoIP providers to send calls directly to each other (possibly over the Internet, maybe over [semi-]private connections). The main idea is cost savings, since the call doesn't need to go out over the public telephone network (PSTN).

To accomplish this, they'll set up a shared database mapping telephone numbers to VoIP providers. So, when a VoIP provider attempts to place a call, it'll consult this directory first. If it finds the number in there, it'll send it direct to the provider instead of over the PSTN. All the providers sign some sort of contract to say they'll be careful with the database and not populate it with invalid entries. Let's just assume the VoIP provider is trustworthy and hires trustworthy people (this is a stupid assumption, but I've had a peering company tell me this, as the security problems are too obvious without this assumption).
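That lookup-then-fall-back logic is simple enough to sketch in a few lines of Python (the names and numbers here are made up for illustration):

```python
# Hypothetical shared peering directory: phone number -> VoIP provider.
peering_directory = {"+15551230001": "provider-a.example"}

def route_call(number: str) -> str:
    # Consult the shared directory first; if the number is there,
    # send the call direct to that provider, else out over the PSTN.
    provider = peering_directory.get(number)
    return f"voip:{provider}" if provider else f"pstn:{number}"

print(route_call("+15551230001"))  # direct to the peer
print(route_call("+15559990000"))  # falls back to the PSTN
```

Note that route_call trusts the directory completely: a single polluted entry silently redirects every member's calls for that number, which is exactly the problem discussed below.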

This system actually holds true inside of a VoIP provider's own network. A provider will want to terminate directly to a customer instead of out via the PSTN then back into their own network. So they'll probably have a directory of their own numbers so they can route those directly.

Well first off, now every peering member's security is bound by the security of every other member. If just one "trustworthy" peering provider gets compromised (not a hard task - more on that later), they can pollute the shared directory and hijack phone numbers. Being able to redirect a financial institution's phone number sounds like a profitable attack. An attacker can simply route the call to their system, then pass it through to the PSTN to avoid detection by users. Note that none of the security technologies available can prevent problems with a subverted, trusted, directory.

But it gets easier...
Many providers let you port your existing number to them when you sign up. From my limited experience, I've seen some of them immediately activate the number for you, so you can get up and running on their network while the port happens. A port can take a bit of time (and for now, let's assume the porting system is secure), so this sounds like a reasonable approach.

Wrong. First off, the new customer's number will probably go right into the provider's internal database, so all calls from that provider will go to the new customer -- the attacker. Depending on the size of the provider, this could be a pretty decent attack in and of itself.

But now, suppose the peering contract didn't specify not provisioning ports-in-progress -- or it did, but the implementation people messed up. Now ALL the VoIP providers have been compromised, by a single provider who was aggressive in their porting tactics.

Eventually it'll probably get resolved, but even a few hours or days of compromising a valuable phone number can be a significant attack.

What's the threat?
As a consumer, in general, I'd not worry too much about people trying to tap my line, just like I rarely worry about the safety of my wired Internet connection. But similar to intercepting credit card info versus hacking a company's database, this is a much juicier target. An attacker who pulls this off gets access to bulk information. Thus, I think the threat of something like this happening is much higher than having my individual calls monitored.

Security | VoIP
Friday, October 10, 2008 12:25:14 AM UTC  #    Comments [0]  |  Trackback

# Wednesday, March 12, 2008
Medical and General Security

Nothing surprising

I've been waiting for this: http://www.nytimes.com/2008/03/12/business/12heart-web.html?_r=2&ref=technology&oref=slogin&oref=slogin

Certain pacemakers (Medtronic in this case) are easy to reprogram without any useful authentication. The result is that an attacker can kill someone remotely by modifying their pacemaker.

This certainly will not be the last we hear of this. The response from Medtronic is idiotic:

"To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide"

It's funny seeing industries that typically have little to no security requirements in their products get rudely awakened. Another vendor, St. Jude, says something equally scary:

"used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them"

Who wants to bet there's some globally shared key at work? At any rate, we expected this kind of stuff because too many people can't think clearly about security (I'll be writing about [the lack of] VoIP security soon).

A growing problem

How should these devices be secured in the first place? I'm not talking specifically about pacemakers, but about all sorts of implants and enhancements that we will have in the coming years, using today's security technology.

First, they need to be remotely monitored. This is relatively easy to secure, as the risk is considerably less: information disclosure. For example, if each monitoring device had to have its public key explicitly trusted for a particular patient, that'd be pretty easy. In the case that a key was disclosed (say, by capturing and attacking a monitoring device), the only access gained is read-only.

Reducing the risk even further, it's possible that the amount of effort required for such an attack exceeds the value of the information gained. For example, if an attacker can access a target's house, they could just steal identification and request that medical records be sent to them.

More important is the editing of configuration. How do we determine who has access? In theory, we want any qualified medical professional to be able to change configuration in case of an emergency. Without a global network connected to the device, the device has no way to validate credentials, particularly revocation. Additionally, even assuming that every device has access to a global database, there would be too many authorised users to ensure security. (Just like large government databases.)

Is this a threat? Some people may think this is a far-fetched idea. Certainly today this is not a widespread fear. It may be a neat way to carry off an attack against a single target, but I doubt it'd be effective for major attacks. But how long will it be until a large percentage of the population carries some kind of embedded device? Pacemakers, medicine delivery systems, vision implants, hearing, digestive -- the list goes on.

The bottom line is that humans will carry more embedded technology, and this technology must be secure *and* accessible. A system where losing your private key means surgery is not usable.

The easy solution

As far as I can tell, the only solid way to ensure security with today's technology is to add a hard link. In order for anyone to modify configuration, the configuration device must establish itself over a physical connection. This ensures no remote attacks are possible. It would take away little to no convenience -- before your configuration could be edited, you'd have to let someone physically connect a device to you.

The same could be done for remote devices. Let's say your doctor wants to adjust your body remotely. You'd simply key the remote device[s] to your doctor, and key yourself to the remote device[s]. You've established a chain of trust that's easy to clear and recreate later. There is no global database, simply yourself and devices you touch.

This mimics what you have in the real world: You trust your doctor after you establish a relationship with him. You can then call him on the phone and you trust his advice to take more or less of the medicine.

A quick note on the details: The medical devices themselves don't need hard lines to the hard configuration interface. Indeed, your "hard link" could be a special device keyed to yourself. However, embedding this device into the body means you won't lose it and it'll be readily accessible to medical teams, even if you're unconscious.

To protect against damage to the hard-link device, I suppose a backup key could be authorized. You could then store it safely yourself (as in, in a bank's safety deposit box, not the database of the device manufacturer).

The general solution

However, this only secures us as much as we can trust the authentication. It still relies on manual revocation and trust editing. That may be acceptable for Verisign when they accidentally issue a certificate in Microsoft's name to an attacker, but it is not acceptable for humans. Specifically, in a short vulnerability window, you could die.

The real solution, and one that we're going to need eventually across all technology, is intelligence. Specifically, a machine intelligence that determines if what is happening or what is requested is dangerous. This is the only way that we will have security moving forward.

This kind of intelligence is what we use to protect ourselves now. If the water comes out glowing green, we decide we won't trust it, even though we do trust (in general) our public water system. If you see your doctor and he recommends moving from 5mg to 500mg of Xanax a day, you'll immediately revoke his trust.

Attacks will adopt this kind of intelligence. A hacker uses a vulnerability to gain access and then attack other systems from there. How long will it be until attacking programs themselves replace the work done by the hacker?

Our software and machines will have to adopt this kind of intelligence to thwart such attacks. It will no longer be "oh, sorry you got hit by malicious code from clicking on a hyperlink, please reinstall your OS". As long as humans can be killed by the devices in use, the stakes are too high for even tiny vulnerability windows.


Wednesday, March 12, 2008 11:56:44 PM UTC  #    Comments [0]  |  Trackback

# Friday, May 05, 2006
SQL Server 2005 Reporting Services Configuration Madness

Well, after almost exactly 6 hours, I've succeeded at installing SQL Server 2005 Reporting Services on a server with more than one website.

We're running Reporting Services on separate web servers. So, after the install of Reporting Services, you run their little configuration tool. This, of course, accomplishes very little :). See, apparently Reporting Services wasn't designed to work on a server running, *gasp*, more than one application.

If you have a decent IIS install, the default website isn't there and thus requests to http://localhost/ aren't gonna work. Reporting Services doesn't take this into consideration, and happily tries to request http://localhost/ReportServer/ even after you've specified this in the config tool. If this is your issue, you'll get a “HTTP Error 400: Bad Request“ when trying to access the ReportManager (/Reports/) website.

You'll need to edit the config files in Program Files\.....\ReportManager and ReportServer. rsreportserver.config needs to point to http://the.reporting.host.name/ReportServer in the UrlRoot element. In RSWebApplication.config, ReportServerUrl will need to have the same value, and the ReportServerVirtualDirectory element must be deleted -- otherwise you'll get a “The configuration file contains an element that is not valid. The ReportServerUrl element is not a configuration file element.” message. This is because the config reading code apparently doesn't fail gracefully. What it's trying to say is “the ReportServerUrl and ReportServerVirtualDirectory elements are mutually exclusive”. I'm still unsure why there should be anything besides a URL...
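For reference, the relevant fragments end up looking roughly like this (element names as above; the surrounding file structure is omitted, so treat this as a sketch rather than complete config files):

```xml
<!-- rsreportserver.config -->
<UrlRoot>http://the.reporting.host.name/ReportServer</UrlRoot>

<!-- RSWebApplication.config: ReportServerUrl replaces
     ReportServerVirtualDirectory; the two are mutually exclusive -->
<ReportServerUrl>http://the.reporting.host.name/ReportServer</ReportServerUrl>
```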

Around here, you might notice a bunch of DCOM errors in your Event Log (or before this point). To fix these, you'll need to go into dcomcnfg and edit the COM security for My Computer. Give the account you're using (like Network Service or “MyReportingServicesAccount“) permissions for local activation and local launch. You need to reboot for these changes to take effect (I think). But don't reboot just yet...

Finally, you end up with a 401 Unauthorized when accessing the Reports site. You might have also noticed you are unable to authenticate when browsing the Reports or ReportServer sites from the local server. Why?
“Windows XP SP2 and Windows Server 2003 SP1 include a loopback check security feature that is designed to help prevent reflection attacks on your computer. Therefore, authentication fails if the FQDN or the custom host header that you use does not match the local computer name.” So I'm guessing NTLM is susceptible to this type of attack, and Microsoft is saving us from it. Well, it also hoses us in this case because, from what I can tell, ReportManager (the thing in the Reports vdir -- why it wasn't called ReportManager by default...) needs to connect to ReportServer. It sends a request, which is denied because of the loopback protection above. A quick registry edit fixes this: http://support.microsoft.com/default.aspx?scid=kb;en-us;896861
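As I recall, the registry edit from that KB article boils down to a single DWORD value (double-check against the article before applying it):

```text
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    DisableLoopbackCheck (DWORD) = 1
```

A reboot (or at least an IIS restart) may be needed for it to take effect.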

After that... you might have a working SQL Reporting Services 2005 install! (Next up: Getting it to work with SSL...)

Really, apart from the horrible setup/configuration, it's a very, very fine product. I'm actually pretty impressed. The report I wanted to set up (and the subscription so it's mailed out) only took about 10 minutes (first time I've ever used RS)! I'm just at a loss why Microsoft makes it so hard to set up. This configuration can't be that unusual. And, even stranger, most (if not all) of these configuration issues could be detected and fixed automatically. In other words, their little configuration app should take care of this stuff itself (or at least give explicit instructions on how to do so). Or maybe I just didn't RTFM that well... but this is a Microsoft product... you're supposed to just shove the DVD in the drive and click next, right? <g>

P.S., if you're getting a “Object Reference not set to an Instance of an Object“ when you add a new subscription, ensuring everything else is 100% working should make it go away...

Code | Misc. Technology | Security
Friday, May 05, 2006 6:02:44 AM UTC  #    Comments [8]  |  Trackback

# Wednesday, July 27, 2005
Secure TCP Remoting in Whidbey

I've spent a few hours trying to get the secure TCP (based on NegotiateStream) integrated security in .NET 2.0 working. While there is a page on this (Authentication with the TCP Channel), it fails to mention that you need one more property in addition to encrypt, impersonationLevel and authenticationMode. It's called “secure”, and it must be “true”. I didn't see it mentioned anywhere, except when I happened to browse the MSDN Forums: http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=55225

I looked at his config, and realised I didn't have this “secure” property. Problem solved. Also, I recommend checking out http://pluralsight.com/wiki/default.aspx/Keith.GuideBook/HowToAddCIAToDotNETRemoting.html, which has a lot of information about Windows security in general, apart from some specifics of remoting and Kerberos. And, finally, yes, there's one more page where the secure attribute is listed (with some other docs) http://blogs.msdn.com/manishg/archive/2005/04/22/410879.aspx
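Putting it together, the channel configuration ends up looking something like this sketch. The attribute values below are placeholders for whatever your deployment needs (check the linked docs for the valid values); the point is simply that secure="true" must be present alongside the others:

```xml
<channels>
  <channel ref="tcp" port="8080"
           secure="true"
           encrypt="true"
           impersonationLevel="Identification"
           authenticationMode="IdentifyCallers" />
</channels>
```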

OK, so perhaps there was some error between the user and the keyboard... but I'm very very excited to see this feature running.

Code | Security
Wednesday, July 27, 2005 2:25:15 AM UTC  #    Comments [1]  |  Trackback

# Thursday, March 31, 2005
Cracking code 5.1: Increasing your configuration
Yet another super-easy tutorial... (Revision 2 for legal reasons)

When attacking code, always look for the smallest, least intrusive change. The more you change, the more you have to worry about A: screwing something up and B: not being able to move your changes forward when the emitted code changes. Sometimes copy protection authors use encryption and the like. Sometimes they even do it correctly. But many times, the critical path of code comes down to a single bit or a couple of bytes.

I've talked about flipping branches (jumps) before. Some programs' checks boil down to an "if(boolean)...", in which case flipping a bit of a jump will reverse the condition (jump-if-equal to jump-if-not-equal). This results in the code always working when you enter invalid input, and not accepting valid input. But more complex code might actually depend on a bit more state, say, a variable being set to a certain number. For instance, maybe it has an "activation level", and the higher it is, the more features are enabled. In such cases, it's not feasible to go around flipping a bunch of branches.
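To make the "single bit" claim concrete: x86 encodes the short conditional jumps jz and jnz as opcodes 74 and 75, so the patch really is one bit of one byte (the offset shown here is illustrative):

```asm
74 0C    jz  short key_ok    ; original: jump when the check passes
75 0C    jnz short key_ok    ; patched: jump when the check fails
```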

Today's tutorial will use IDA Pro (www.datarescue.com). You can get a free demo to try out. However, if you're gonna do a lot of work with IDA Pro, it's only $439 for the full version. It even supports cross-platform debugging now (i.e., debug your Linux app on Windows), and supports .NET executables. I still have to try it, but it sounds like this could be my solution to developing (debugging) on Windows for Linux. Very, very cool.

No sample program this time, since it is really easy to grasp. Let's take a theoretical program: MagicLineConverter. MagicLineConverter converts input data to output data and does some magical transformation on it. The program is configured for a set number of lines. So you can buy a 1000 line program, a 2000 line program, etc. They have some genius crypto people on staff, so trying to generate fake config files for it just isn't possible. You need to try it with a million lines, just to make sure it works, so you can get a purchase order to buy the program. So, you download the demo program, but it expires before you get a chance to examine it. Now, you have zero lines configured for use.

Thus, we load the program in IDA Pro. After loading the program, you'll get a large disassembly view. Poke around, and you'll see names like “sub_8048400” and “dword_804967C”. As with any attack, you've got to start off by finding the real method and variable names. IDA Pro makes this not too hard, offering a renaming feature so you can rename functions and variables as you go along. Thus, if you think a variable holds a value representing whether there is network access, you can rename it to, say, "IsNetworkAvail" instead of remembering a memory location. If you work at it for a while, you can probably reconstruct a lot of the program logic. The more you understand, the better your patch can be.

Well, when you run the program, output like this is probably sent:
Configured for 1000 lines.

Back in IDA Pro, go to the Strings window. Search for that string. Double-click it and you'll see something like this in the disassembly: .data:001234 aConfigured_for db 'Configured for %d lines.',0Ah,0. On the next lines, you'll have information like "; DATA XREF: sub_001400+E". IDA is telling us where this string is referenced. If we go there, we'll probably see something like this:

push ds:dword_0A240200
push offset aConfigured_for ; "Configured for %d lines.\n"
call printf

By now, we're probably almost done. We've found where some code is that reports the total lines the program is configured for. Somehow, this routine knows where to get the data, or the data is passed in. Since there's a dword being pushed and printed, it's safe to assume the count is stored there. Click that dword, and press 'n' to rename it. Enter a good name, such as 'possibleCount' or 'printedCount'. When the copy protection is good, there could be multiple levels of indirection leading up to printing something critical like that. Thus, using tentative names that reflect what you are certain of helps if things get more complex down the road. You can also rename the routine to something useful like "printCount".

Now, we want to see where else this variable is used. IDA Pro has a feature that lets us see all references to an item. In the disassembly, right-click our renamed variable, and select “Jump to xref to operand” (or just press X). A dialog is shown listing the different instructions using this memory. Look for ones that look like initialization. Here are two common examples:

mov ds:printedCount, 0
mov ds:printedCount, ecx

The first one first. Highlight that entire line (mov ds:printedCount, 0). Then switch to Hex view. You'll see something like this highlighted: C7 05 34 12 00 00 00 00 00 00. Since it's a dword, there are 4 bytes representing zero. Modify any of them to a value of your choosing (e.g., setting the highest byte to 0x01 turns mov ds:printedCount, 0 into a mov of 16,777,216). This patch can be as small as a single bit if you choose!

But wait... sometimes GCC won't generate a “mov something, 0” to initialize it. In some cases, for some processors and certain optimization levels, it'll use a register for the initialization. In such cases, the disassembly might look something like the second example:

; Somewhere deep in the program
mov ds:printedCount, ecx ; After critical processing

Now we have to find out where ecx is initialized. It probably won't be too far away. If we're lucky, there will be a mov ecx, 0. However, optimized code probably won't emit that. Instead, it might have:

xor ecx, ecx

XORing a value against itself will always produce zero, and “xor ecx, ecx“ takes up three fewer bytes than “mov ecx, 0“ -- the xor is only two bytes (0x31C9). Two ideas: First, fill it with NOPs. Depending on the value already in ecx, this might work and give us some number of lines. However, that might not work: ecx could be zero already. Fortunately, we can address a single byte of ecx with this: mov ch, 0xFF. This moves 0xFF into CH, the high byte of CX, which in turn is the low word of ECX. That instruction is also only two bytes (0xB5FF), so it's a perfect in-place replacement for the xor on the same register. Assuming ECX was zero, that one patch makes it hold the value 65,280.
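As a quick sanity check of that swap, the two encodings really are the same size, and setting CH on a zeroed ECX gives exactly the value claimed:

```python
# Sketch: verify the two-byte swap described above.
# xor ecx, ecx assembles to 31 C9; mov ch, 0xFF assembles to B5 FF.

XOR_ECX_ECX = bytes([0x31, 0xC9])
MOV_CH_FF = bytes([0xB5, 0xFF])
assert len(XOR_ECX_ECX) == len(MOV_CH_FF)  # same size: safe in-place patch

ecx = 0                                # value after the original xor
ecx = (ecx & ~0xFF00) | (0xFF << 8)    # effect of mov ch, 0xFF on ECX
print(ecx)  # 65280
```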

In both cases, it's only a two-byte patch. You can distribute the patch as a simple offset:value pair -- 9 bytes of ASCII text. Sorta hard to stop that, and anyone could patch just from their own memory.
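An offset:value patch is so simple that the applier fits in a few lines. The format here (hex offset, colon, hex bytes) is my own invention for illustration; any equivalent convention works:

```python
# Sketch: apply a tiny "offset:value" patch to a binary image.

def apply_patch(data: bytes, patch: str) -> bytes:
    offset_s, value_s = patch.split(":")
    offset, value = int(offset_s, 16), bytes.fromhex(value_s)
    return data[:offset] + value + data[offset + len(value):]

image = bytes([0x90, 0x31, 0xC9, 0x90])   # ... xor ecx, ecx ...
patched = apply_patch(image, "1:b5ff")    # swap in mov ch, 0xFF
print(patched.hex())  # 90b5ff90
```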

Moral of the story: Write obfuscated code, or use a post-compile processor that will mix up your code for you. If your code can be cracked by changing a single bit... it's just protecting the honest :). While 100% protection is never possible, it should at least be a lot harder than letting a stray gamma ray crack your code!
Code | Security
Thursday, March 31, 2005 4:26:10 AM UTC  #    Comments [1]  |  Trackback

# Tuesday, March 29, 2005
Security: Windows vs. Linux: Another comparison

Apparently this was recently published: http://www.securityinnovation.com/resources/linux_windows.shtml

To summarize, RedHat Enterprise Linux 3 had 132 security issues (with a minimal configuration), whereas Windows 2003 had 52 for calendar year 2004, *when configured as web servers*. This includes a webserver (Apache/IIS), app platform (PHP/ASP.NET), and DB (MySQL/MSSQL). Only issues fixed in 2004 were counted.

A few points:
 - They took a default install of Windows 2003, stating that it's too hard to get rid of stuff like IE. Thus, any patches applying to Windows 2003 were counted, regardless of whether they could be exploited in this configuration. This, of course, affects Windows' rating.

 - Same for RHEL. RHEL installs a lot of stuff that might not be in use and not exploitable. I'm guessing that's what accounts for the very high numbers on RHEL. Then again, it's a fair comparison for average users (like myself, who just installs RHEL/Windows out of the box and doesn't really screw around with a lot of stuff).

 - However, assuming super-competent admins on both platforms, I'd expect the exploitable vulnerabilities to be close to zero on both platforms. I.e., if admins took precautions to install patches quickly as well as lock down services/systems as soon as a vulnerability was discovered. However, that's not realistic at all, and that's why a study that just takes a standard install is needed.

 - They used MySQL on RHEL. While this might be the right choice since people actually use it... MySQL is junk. Seeing as how it can barely be considered a DB and how poor it is overall, I wouldn't be surprised if MySQL accounted for a large share of the vulnerabilities.

I think the study should have broken down where the vulnerabilities were in the product. Not knowing what was the fault of IIS, or MySQL, etc. makes it hard for people to compare the products for their own usage.

The study also mentioned the “Days of Risk“, i.e., the time from when a vulnerability was first publicly reported to when it was fixed. RHEL will always have an intrinsic disadvantage here. Since most issues are related to open source, it's harder to do private reporting.

Second, there are vulnerabilities in Microsoft software that are fixed, but never reported. For instance, IIRC, the “GIF Integer Overflow” problem that was found after some Windows source was leaked was fixed in newer versions of IE/Windows, but never reported (until the source was leaked). I also know from personal experience that you can report a bug to MS, and if you don't go public with it, they'll roll the fix into an SP or the next release. These issues are just [almost] intrinsic to open vs. closed source.

Some might say, “Oh no, there are issues in Windows 2000 that aren't publicly published!“, but the same exists for RHEL. The difference is that some of these “private“ issues can get fixed in newer versions without ever becoming public, while in open source, that is much harder to do.

Now, some people are up in arms since it was not disclosed that the funding came from Microsoft. Bruce Schneier, for instance, is saying that people will just ignore the results and focus on this possible bias. That's BS. Since the methodology is published, it's not exceedingly difficult to recreate the results. People should do that instead of bitching about who funded the research. My guess is that people who are satisfied with the results don't care to go recreate them, and those who aren't are afraid that they'll find the same results and thus have no argument.

Tuesday, March 29, 2005 2:00:22 AM UTC  #    Comments [2]  |  Trackback

# Tuesday, February 01, 2005
So will kids these days just continue helping the US government?

CNN has a story about American high school kids who don't know what free speech is. (Thanks, BoingBoing!)

Wow. Double wow. Are kids really this clueless? Are they really such idiotic sheep? Through an intense, multi-year study* that I've done, I know that many kids are idiots. But now they're just gonna go and screw themselves over? Maybe these kids LIKE CSS and Region Encoding? Perhaps the MPAA are visionaries and are actually marketing to these people?

Sigh... I'm frightened by the attitude and lack of critical thinking I see in most adults in the States these days. I'm surprised that most Americans do not know what made their idea of government any good. Here's a hint: it's not poor cars and bad food. The USA started out as a good idea because it had a government that was built to limit itself. These days, people just think it's about capitalism, immoral behavior, and whatever other base thing comes to mind.

The thought of these children growing up, and from an early age thinking that the government HAS or SHOULD HAVE more power... that's simply chilling.

Misc | Security
Tuesday, February 01, 2005 2:56:20 AM UTC  #    Comments [3]  |  Trackback

Why not to use Bellster

So, Pulver launched a great new marketing campaign called Bellster. People are hyping this up as “peer-to-peer telephony”. I'm tired of P2P being abused as a buzzword -- the entire freaking Internet is a peer-to-peer system. But that's not what I really care about. People are joining Bellster without thinking about what it means. There are two primary problems with Bellster.

1. *Most likely* your phone company forbids it, since you are reselling your service. In some countries, this might even be illegal, in addition to violating your own contract. There is no such thing as “unlimited” calling (except, perhaps, inside a certain network). If you go over what your telco thinks is acceptable for “unlimited” calling (somewhere between 1000-5000 minutes, probably), you'll get charged, or cut off, or something. Other telcos might notice your calling pattern has significantly changed. If you use your phone normally, and then all of a sudden your volume quadruples across a wide range of numbers at a wide selection of times... software can flag that, and you can get your line cut (it's called bypass). This will depend on each telco/country. Then again, maybe you hate the telco and want to stick it to 'em. If you get away with it... good for you.

2. It's all fun and games 'till someone gets hurt. (And then it's fun for one less person.) Sooner or later, someone's going to make bad phone calls via Bellster. The problem is that these phone calls come from YOUR phone line. So, when the Secret Service investigates the latest terrorist threat, and finds it came from your line... ouch. I'd expect nothing less than a personal visit. Depending on how that goes... good luck. In the USA, I can only imagine what would happen. Sure, eventually you will probably be cleared and be OK. Meanwhile, are you willing to risk being imprisoned, questioned, perhaps having your computers confiscated, etc., etc.?

In light of those two things, who on Earth would use Bellster? My local calls cost more than what I pay to call half the world with VoIP (yes, even at my commercial retail rates, not wholesale carrier rates). So *I'm* not going to share my line to call Canada when I can already do that very cheaply (not to mention that if I did share my line, within a month or two it'd be cut). Plus, I'm at the whim of whoever is running the service. I doubt the service level is gonna be that great.

So... potential risk... zero benefit... why would I do this? THINK people, THINK!

Misc. Technology | Security
Tuesday, February 01, 2005 1:34:00 AM UTC  #    Comments [0]  |  Trackback

# Saturday, January 29, 2005
How I want computer security to work

I hope the days of running arbitrary CPU instructions to perform every single task come to an end soon.

I hear people complaining about how MS doesn't make them secure enough. I hear from the other end (i.e., the pros) that we have to have user education. I read about parents having to filter their kids' computers, ensuring they don't run malicious code (not “bad content“, such as pro-Bush propaganda, but code to take over a PC). People run anti-virus software. People are now running Anti-unwanted-commercial-software programs. Heck, in some cases, there's even Anti-anti-spyware code out there.

We hear about having to “ensure we trust the source”, as in, “do I trust Bob to send me a web site link”? Not even a program, *just a link*! We have “don't execute attachments” and “don't install code from websites”, on and on and on. Some people even think there should be an “Internet driver's license” or some sort of basic PC user training/licensing.

This has got to stop. It's been shown that we'll never be able to get average people to make correct trust decisions. It's also stupid to want to do that. If someone writes up a cute “Flying Bunnies.exe” game, I WANT to be able to run it, without worrying that it's some kind of attempt to hack me.

.NET gives us the first level. We have code access security, which can ensure that certain code running can't do certain things. Next, we need an OS that takes this home.

It looks as if we'll be having a little girl this May. By the time she's old enough to have her own real PC, I hope these things will be an issue of the past. When I got my first computer, I was 5. I was already somewhat familiar with DOS; I knew my way around. How different would that have been, had I had to understand a full set of security- and trust-related data? How much slower would I have gotten into things if everything had to be accompanied by a ton of overhead just so that I wouldn't get hacked?

If Microsoft embraces managed code fully (and it looks like they are), this should not be hard. Managed programs should just run. Get an email attachment? Just run it! See a cute game that needs rich UI controls from the web? Should be automatic. Only when an unmanaged EXE comes along should we run into roadblocks. Indeed, any program requiring trust should require us to login as admin (or elevate to admin) and allow it.

So, in about 5 years, I hope to be buying a nice little PC for my child. I want to flip it on, use biometrics as her password, and LET HER PLAY dammit! If she finds a bunny program, I want her to be able to run it. Now, I'm hoping my kids will follow after me and understand computers enough to make those decisions for themselves (heck, and for other people :)), but I sure don't want that to get in the way.

The same applies to pretty much everyone else (yea, I'm saying a lot of users aren't much more advanced than a 5-yr-old). We can't expect people to make security decisions. We simply MUST have a way for things to get done, without security implications. I think at this stage, this is entirely possible.

Misc. Technology | Security
Saturday, January 29, 2005 10:12:26 PM UTC  #    Comments [4]  |  Trackback

# Thursday, January 13, 2005
Primer on Encryption in Spanish

MVP Patrick MacKay down in Chile has finally gotten his Spanish primer on encryption up on the MSDN site. Check it out here: Desmitificando la Encriptación (Parte I). Not to boast or to brag, but I drew the little face that's used to show off the cipher modes :).

Thursday, January 13, 2005 5:53:55 PM UTC  #    Comments [0]  |  Trackback

# Thursday, December 30, 2004
Newest spyware and popups brought to you by Windows Media

It appears as if Microsoft's Windows Media DRM protection sucks in yet another way. Some evil people are using Windows Media files to open popups, which then try to confuse users into installing spyware and so on. I can imagine that perhaps this is even by design (when you try to play protected media, it wants to send you to a website so you can purchase a license).

Some companies are now trying to trick users into downloading these files, then taking advantage of the extra confusion since the windows open from WMP (”What the... I have to click this? Huh? Must be related to this new Windows Media Player...”).

While this “hole“ isn't *that bad*, since, AFAIK, all it does is fire up a browser (ok, that can be pretty risky, depending on the circumstance, and perhaps it can easily be used to escalate?), why is this even happening in the first place?

  1: Microsoft builds DRM into its media system, even though no users are asking for it.
  2: Microsoft then turns ON these features by default -- features that connect to arbitrary sites without the user doing any action remotely related to Internet access.
  3: User gets burned, and some crafty devil-developers are happy.

How is this good? If MS would just wake the hell up and do what's right, instead of continuing to cater to media executives, we'd all be a lot better off.

Thursday, December 30, 2004 10:55:54 PM UTC  #    Comments [0]  |  Trackback

# Wednesday, December 22, 2004
What have I done? Patrick gets cracking

Patrick Mac Kay, a Chilean MVP, gets into IL cracking fun, en español. He says he wasn't sure how to do it before, but after my quick tutorial, had “enhanced” a program to handle more data. What have I done? :)

Wednesday, December 22, 2004 4:22:46 AM UTC  #    Comments [3]  |  Trackback

# Wednesday, December 08, 2004
Running Windows as non-admin, Gnome style

MVP Valery just wrote a cool little utility to assist people running as non-admin. A little key icon that sits in your notification area, and allows you to escalate your privs. Similar (in some ways) to how Gnome handles running admin things. Very nice.

Wednesday, December 08, 2004 12:19:10 PM UTC  #    Comments [0]  |  Trackback

# Thursday, December 02, 2004
Are kids these days really so helpless?

I came across this program, called “Hector Protector”, created by the NetSafe Programme of New Zealand. It's to “help keep kids safe online”. What does this program actually do? It puts an image of a dolphin on-screen. Kids who run into materials that frighten them should click the dolphin. At that point, a congratulations message and picture of a dolphin fill the screen, protecting the poor child. The idea is that kids can do this and then run and find their parents or teacher to help them with the bad things on the computer.

Are kids these days really so helpless that they need a bloody dedicated program just to hide a window? I've been using computers since before I can remember. I never needed a system to hide stuff from me. I was on BBSs since I was 8 or 9 or something. Hell, when I was 13, my friend and I ran a BBS, complete with an “elite” section of programs, images, etc. He even worked as a sysop for other places, checking out all uploads and adding descriptions. He didn't need a stupid program to keep him safe. Why is it that kids now have turned into (or people think they are) such wussies when it comes to computers and networks?

Also, what's wrong with “If you see something wrong, minimize the window and go get help.”? Are kids going into such a bloody panic they need a damn dolphin there to click on? They're so offended and frightened they can't hit the minimize button? Also seems like a missed opportunity to teach keyboard shortcuts (say, Win+D). Or, what's wrong with just standing up and going to get help?

I'm not against helping kids deal with things. But technology isn't the answer. That's what parents and teachers are there for. Providing crutches like this? Please.

And... what happens when kids stumble across bad animations of Hector doing things he shouldn't? Won't this confuse and scar kids even more? Or what happens if kids happen to stumble upon some dolphin + redhead footage? Just think how many kids' lives are being wrecked by trusting Hector, only to find he scares them later!

Misc. Technology | Security
Thursday, December 02, 2004 5:01:34 PM UTC  #    Comments [1]  |  Trackback

Security FUD: Internet Security Foundation

Security sells quite well right now, and lots of companies like to cash in by making up fake security threats and then selling a “solution“. One such company is the “Internet Security Foundation“, which is just a clever marketing name for “Some Lame Company Trying to Sell Free Tools“.

When you go to the site (InternetSecurityFoundation.org), they make a big deal out of a fake security alert from Sept. 2004: that you can see the text in a textbox, even if Windows renders it as asterisks. Anyone who programs understands this. These people pretend it's some kind of new threat and that terrorists are using it over the Internet to rob bank accounts. What a load of crap!

Why do they do this? They want to sell you “SeePassword“ (SeePassword.com), a $20 utility to do the same thing as the free Glow Password Recovery Util (download: Glow.exe (14.5 KB)) -- or similar programs, which have been around for YEARS.

The REAL issue lies in each individual program passing around passwords in plaintext. If a password is sitting in a user's memory space, in plain text, then why is it a surprise that it can be seen? Oh wait, it's not a surprise. This company is just using security for marketing.

Oh, and interesting info on their domain name registration. Perhaps I shall give them a call.

   KMGI Corp.
   119 72 St., 339
   New York, New York 10023
   United States

   Registered through: GoDaddy.com (http://www.godaddy.com)
      Created on: 29-Oct-04
      Expires on: 29-Oct-05
      Last Updated on: 29-Oct-04

   Administrative Contact:
      Corp., KMGI  ak@kmgi.com
      119 72 St., 339
      New York, New York 10023
      United States
      17032427114      Fax -- 12122024982
   Technical Contact:
      Corp., KMGI  ak@kmgi.com
      119 72 St., 339
      New York, New York 10023
      United States
      17032427114      Fax -- 12122024982

   Domain servers in listed order:

Edit: Fix .com to .org (Although both appeared to be registered by the same thing).

Thursday, December 02, 2004 1:04:21 AM UTC  #    Comments [2]  |  Trackback

# Sunday, November 28, 2004
Cracking Code 4: Replacing a strong name

In my last article, someone commented that editing an assembly would create a problem if the assembly is strong named. They are correct. If an assembly has a strong name and is tampered with, you'll get a System.IO.FileLoadException: Strong name validation failed for assembly <foo>.

Strong names exist to identify an assembly. They are "strong" because the identification is provided by cryptographic means, rather than just the name of the file. The system is designed to ensure the assembly is what it claims to be, and public key cryptography proves it. Against malicious people, it ensures someone can't drop in an assembly that pretends to be signed with one of your trusted publishers' keys and get you to trust it more than you should. It's NOT meant to be a way to stop people from editing and running assemblies on their own machine.

I was hoping there was a simple way to replace the strong name on an assembly, but I don't believe there is. Then again, there's a LOT of stuff that ships with .NET, so perhaps I just overlooked it. If so, let me know. At any rate, I wrote a tiny program to replace the strong name on an assembly. Let me explain it.

Somewhere in the assembly, a public key is provided (otherwise the runtime wouldn't know what to verify against!). Then, there is a hash of the assembly, and the hash is signed with the private key. When the assembly is modified, the hash will change, the signature will no longer match and the runtime will refuse to load the assembly. A cracker usually won't have access to the private key, and thus can't resign. However, one can simply replace the public key in the assembly with our own public key, and resign using our own private key. Problem solved.

A quick word to those who are thinking "Can't I just use SN -Vu to skip verification checking?". No, this doesn't work. Verification skipping only applies to partially (delay signed) assemblies, not to fully signed assemblies. If you somehow manage to get verification skipping working on fully signed assemblies, I'd love to know.

My program is a very simple tool with nothing amazing in it (except for a very slow search algorithm). All it does is take an assembly and a keyfile, replace the public key, and call SN -R <assembly> <keyfile> to resign. Here's how you'd use it:

1. Take Some.exe, a strongly named assembly. Modify it.
2. Note that attempting to load Some.exe will fail.
3. Create a new keyfile by running "SN -k mykey.snk". (SN is the StrongName utility that ships with the .NET Framework SDK).
4. Ensure you have the .NET Framework SDK (bin) in your path.
5. Change the public key and resign via "SNReplace Some.exe mykey.snk".

That's all. You can run "SN -Tp Some.exe" before and after to see that the public key has indeed changed. "SN -v Some.exe" will verify things are in order.
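The replace-and-resign step described above could be sketched roughly like this. This is my own simplification in Python rather than the actual C# source: the public-key blob is located by a plain byte search (the "very slow search algorithm"), and sn must be on your PATH for the resign step.

```python
# Rough sketch of SNReplace's core idea: splice a new public key blob
# over the old one, then have SN -R recompute the signature.
import subprocess

def swap_public_key(data: bytes, old_key: bytes, new_key: bytes) -> bytes:
    """Replace old_key with new_key in the assembly's raw bytes."""
    if len(old_key) != len(new_key):
        raise ValueError("keys must be the same length to splice in place")
    idx = data.find(old_key)            # naive linear search
    if idx < 0:
        raise ValueError("old public key not found in assembly")
    return data[:idx] + new_key + data[idx + len(old_key):]

def resign(assembly: str, keyfile: str) -> None:
    # sn.exe ships with the .NET Framework SDK
    subprocess.run(["sn", "-R", assembly, keyfile], check=True)
```

In practice you'd pull the old public key out of the assembly's metadata rather than pass it in by hand; the splice itself is the whole trick.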

Download: SNReplace.exe (16 KB) Source: SNReplace.cs.txt (2.72 KB)
Code | Security
Sunday, November 28, 2004 7:20:21 AM UTC  #    Comments [12]  |  Trackback

# Friday, November 26, 2004
Cracking code 3: Cracking an obfuscated .NET assembly

It's been a while since I wrote anything that interesting, so I figured for Thanksgiving, I'd go ahead and do so. Merry Thanksgiving. The first article in this “series“ is here.

Cracking .NET programs can be just like cracking any other program. In this article, I'm going to use the same approach as I did last time. I threw together a quick little program called CrackMe2. CrackMe2 has a really cool feature called “Reverse Text”, however, it's only available to registered users. What's a poor boy to do?

First, we try registering. Since we don't have a valid code (we don't even know what one looks like), we get an “Invalid serial.“ MessageBox. OK, so now we know that the program does something when we click a button, and if the serial is wrong, we get a MessageBox.

Darn, 123 didn't work.

Well, the first step in cracking is defining our target and its location. Our target is the code that's deciding to say “Invalid serial.” instead of “You're registered!”. Where's the “bad code“ that needs to be fixed? Well, with a .NET assembly, our first information is gained by taking a look with IL DASM.

View of the obfuscated CrackMe2 assembly

Oh no! It's obfuscated (thanks to Ivan Medvedev's Mangler). Let's assume this is a big application and that we'll never find what we're looking for just by going through the IL. Just by glancing at the hierarchy, we don't know that much more than when we started: There's a form with code.

Seeing past the names
Now certainly, we could do static analysis to find out where the bad code is. One way would be to get the strings (Ctrl+M in IL DASM, scroll to the bottom), then grep the IL for ldstr, and work from there. In fact, that's a pretty quick and easy way to locate certain parts. However, let's pretend the strings are encrypted/dynamically generated and that's not viable. So, let's start debugging.

[Michael@MAO C:\]$ cordbg CrackMe2.exe
Microsoft (R) Common Language Runtime Test Debugger Shell Version 1.1.4322.573
Copyright (C) Microsoft Corporation 1998-2002. All rights reserved.

(cordbg) run CrackMe2.exe
Process 4488/0x1188 created.
Warning: couldn't load symbols for c:\windows\microsoft.net\framework\v1.1.4322\mscorlib.dll
[thread 0x1510] Thread created.
Warning: couldn't load symbols for C:\CrackMe2.exe
Warning: couldn't load symbols for c:\windows\assembly\gac\system.windows.forms\1.0.5000.0__b77a5c561934e089\system.windows.forms.dll
Warning: couldn't load symbols for c:\windows\assembly\gac\system\1.0.5000.0__b77a5c561934e089\system.dll

[0004] mov         ecx,98543Ch

cordbg is a command line debugger that ships with the .NET Framework SDK, and it's just loaded the CrackMe2.exe and related assemblies. Just like before, we're going to go ahead and set a breakpoint and find out where we are in the program, and work from there. So, let's breakpoint the MessageBox.Show function. We use IL-similar syntax to specify the function name: NameSpace.ClassName::Method.

(cordbg) b System.Windows.Forms.MessageBox::Show
Breakpoint #1 has bound to c:\windows\assembly\gac\system.windows.forms\1.0.5000.0__b77a5c561934e089\system.windows.forms.dll.
#1      c:\windows\assembly\gac\system.windows.forms\1.0.5000.0__b77a5c561934e089\system.windows.forms.dll!System.Windows.Forms.MessageBox::Show:0      Show+0x0(native) [active]

Then, we tell cordbg to go until it breaks by typing go. The form comes up, and we enter a serial number: 123.

(cordbg) go
Warning: couldn't load symbols for c:\windows\assembly\gac\system.drawing\1.0.5000.0__b03f5f7f11d50a3a\system.drawing.dll
break at #1     c:\windows\assembly\gac\system.windows.forms\1.0.5000.0__b77a5c561934e089\system.windows.forms.dll!System.Windows.Forms.MessageBox::Show:0      Show+0x0(native) [active]
Source not available when in the prolog of a function(offset 0x0)

[0000] push        edi

Bingo, we're stopped at a MessageBox. We want to know who called this function, since most likely that will lead us to the critical code section we need to fix. So, we ask cordbg where we are:

(cordbg) where
Thread 0x1510 Current State:Normal
0)* system.windows.forms!System.Windows.Forms.MessageBox::Show +0000 [no source information available]
                text=(0x00ad5854) "Invalid serial."
1)  CrackMe2!CrackMe2.Form1::AAAAAAAAAAAAAAAAAAAA +0070 [no source information available]
2)  system.windows.forms!System.Windows.Forms.Control::OnClick +005e [no source information available]

9)  system.windows.forms!ControlNativeWindow::OnMessage +0013 [no source information available]
--- Managed transition ---

We see what's expected. Somewhere in Win32 code, a message was sent, and we see the OnMessage called and bubbling up all the way to the Control::OnClick, and then user code. We can look at all the arguments along the way, and that's useful for more complex scenarios (say, when a registration function calls another passing the serial number or validation code).

At any rate, we've got something to go on: The name of the function that calls the MessageBox: CrackMe2.Form1::AAAAAAAAAAAAAAAAAAAA (20 A's). We're done with cordbg (quit). Our next stop is to read the bad code.

Looking at the bad code
Using IL DASM (see above), I navigate to the CrackMe2.Form1::AAAAAAAAAAAAAAAAAAAA method. Inside is relatively straightforward code. First, there's a try/catch containing an Int32::Parse call. The result is stored in local 0, so we now know the serial is numeric. Immediately after the catch handler, we have this snippet:
  IL_0022:  ldloc.0
  IL_0023:  ldc.i4.1
  IL_0024:  and
  IL_0025:  ldc.i4.1
  IL_0026:  bne.un.s   IL_0035
  IL_0028:  ldarg.0
  IL_0029:  ldstr      "Invalid serial."
  IL_002e:  call       valuetype [System.Windows.Forms]System.Windows.Forms.DialogResult [System.Windows.Forms]System.Windows.Forms.MessageBox::Show(class [System.Windows.Forms]System.Windows.Forms.IWin32Window, string)

Load the local (the number entered), then load the number 1, and AND them. Then load 1 again, and if the two are not equal, jump to IL_0035. If they are equal, execute the following instructions, which quite obviously say “Invalid serial.”. ANDing a number with 1 and comparing the result to 1 is a check for oddness. So, at this point, we could write a key generator that produces... even numbers. A key generator is always preferred to a patch; however, generally speaking, finding the algorithm might be a bit harder, and there's always the possibility that the check actually does something hard to fake (i.e., uses RSA or talks to a hardware dongle/web service). So, let's go on and patch this code.
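The check above, and the trivial "keygen" it implies, can be sketched in a few lines (Python here, standing in for the IL; `keygen`'s range is arbitrary):

```python
# Sketch of the serial check performed by the IL snippet above:
# ldloc.0 / ldc.i4.1 / and / ldc.i4.1 / bne.un.s good_path
import random

def is_valid_serial(serial: int) -> bool:
    # odd numbers hit the "Invalid serial." branch
    return (serial & 1) != 1

def keygen() -> int:
    # any even number passes the check
    return random.randrange(0, 1_000_000) * 2

print(is_valid_serial(123))  # False
print(is_valid_serial(42))   # True
```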

At IL_0035 (the target of the branch if the number is even), we have some code that does activation work and then proceeds to say “Thank you...”. Simple sample. Now, let's make the fix.

Simple Patching
With IL DASM and IL ASM, we have a really easy way to make patches. Simply run ildasm /out=CrackMe2.il CrackMe2.exe, and IL DASM will dump all the IL required for that assembly to a nicely formatted file. All we have to do is go to the bad method and fix up the IL. I think the least intrusive fix would be to add “br IL_0035” to the top of the method. That would branch immediately to the good code, and the product would activate on any serial number entered.

However, some obfuscators try to stop IL DASM round tripping, and that might stop some posers in their tracks. The IL obfuscator I'm going to give away for free will do this, for example. (Actually, my free obfuscator would make this tutorial a bit harder because of how it handles names -- we'd have to actually get a token instead.)

Assuming we can't use IL DASM/ASM, what can we do? Use a hex editor.

Binary Patching
When we can't reassemble an entire program, we can patch certain opcodes instead. Tools like OllyDbg have a built-in assembler so we can easily make patches to the x86 code. For IL, I'm not aware of any such tool. Another issue with binary patching IL is that we have to ensure the resulting IL is fully correct and is able to be JIT'd to native code. If our patch ends up screwing with the IL in a way that makes it incorrect, we'll get a runtime exception from the execution engine. Let's try to create a binary patch that jumps from the beginning of the method right to the good code, at IL offset 0x0035.

First, in IL DASM, turn on “Show bytes”, under the View menu. This allows us to see the actual bytes that make up the opcodes. Now, lets look at the beginning of the critical function:

  // Method begins at RVA 0x2434
  // Code size       78 (0x4e)
  .maxstack  2
  .locals init (int32 V_0)
    IL_0000:  /* 02   |                  */ ldarg.0
    IL_0001:  /* 7B   | (04)000002       */ ldfld      class [System.Windows.Forms]System.Windows.Forms.TextBox CrackMe2.Form1::AAAAAAAAAAAA
    IL_0006:  /* 6F   | (0A)000026       */ callvirt   instance string [System.Windows.Forms]System.Windows.Forms.Control::get_Text()
    IL_000b:  /* 28   | (0A)000027       */ call       int32 [mscorlib]System.Int32::Parse(string)
    IL_0010:  /* 0A   |                  */ stloc.0
    IL_0011:  /* DE   | 0F               */ leave.s    IL_0022
  }  // end .try

This code is protected in a try block. We could go and remove the try block, but that's modifying more code. Generally speaking, we should aim to patch as little code as possible to ensure we don't accidentally screw something up. So, we're going to deal with the try block and fix it from within. The ECMA specifications for .NET will come in handy here. Specifically, Partition III, CIL. This can be found in the .NET Framework SDK folder, under “Tool Developers Guide\docs”. It's also available from MSDN, here.

The first instinct is to say: hey, let's change IL_0000 to a br to IL_0035, and NOP out the remainder of the try block. However, that'd create illegal code, since you can't branch out of a try block; you must use the leave opcode instead. So, let's rewrite the method to simply leave to IL_0035. Here's the description of the leave opcode:

The leave instruction unconditionally transfers control to target. Target is represented as a signed offset (4 bytes for leave, 1 byte for leave.s) from the beginning of the instruction following the current instruction.

The formats (in hex) are DD <4-byte offset> for leave and DE <1-byte offset> for leave.s (as shown above). We'll use leave.s, just to be efficient :). Since the total size of leave.s is 2 bytes, we calculate the offset to 0x35 from 0x02 (since our leave instruction starts at 0x00). Subtraction tells us we need an offset of 0x33. Hence, our leave instruction in hex is: DE 33. Since that alone would leave the IL in an incorrect state, we must NOP out the rest of the try block. The hex for nop is 00.
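This offset arithmetic is easy to get wrong by hand, so here's a quick sketch (in Python rather than this post's usual C#) of the rule the spec describes: the operand is the target offset minus the offset of the instruction *following* the leave.s.

```python
def leave_s_operand(instr_offset: int, target_offset: int) -> bytes:
    """Encode leave.s (DE <1-byte signed offset>). Per ECMA-335, the offset
    is measured from the instruction FOLLOWING leave.s, which is 2 bytes long."""
    delta = target_offset - (instr_offset + 2)
    if not -128 <= delta <= 127:
        raise ValueError("target out of range for leave.s; use leave (DD) instead")
    return bytes([0xDE, delta & 0xFF])

# Our patch: leave.s sits at IL_0000, and the target is IL_0035.
print(leave_s_operand(0x00, 0x35).hex().upper())  # → DE33
```

The same helper also confirms the one-byte patch discussed later: a leave.s at 0x11 retargeted to 0x35 needs operand 0x22.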

Open the assembly in your favorite hex editor, and let's find the method. IL DASM gives us the RVA, but for now we'll just search for a specific byte sequence. The “Show bytes” view allows us to easily find our place. Do note that the way tokens are displayed ((04)000002, for example) is reversed from how they are stored on disk, since the bytes are written little-endian. Depending on the size of the app, you might need to search on quite a long byte sequence, since short IL sequences are likely repeated. For this case, we're going to search on the last bit: “0A DE 0F”. No other matches are found, so this is the one.

As when programming, in cracking we have many ways to solve a problem, and many of them can be considered “right”. We could make a simple one-byte patch that accepts any number as a valid serial. This has the merit of ensuring the local int is still assigned, and, well, of being only a one-byte edit. The leave.s opcode is at offset 0x11, so add 2 (the instruction's size) to get 0x13. 0x35 - 0x13 = 0x22. So by changing “0F” to “22”, we'd have our crack. However, let's stick to the original plan and jump right to the good bits from the beginning.

In the hex editor, we back up a bit until we find the 02 7B 02 00 00 04 part (ldarg.0, then the load of the textbox field). At the 02, we drop our leave.s IL_0035 payload, which is DE 33. Then, we NOP out (00) everything up to the end of the 0A DE 0F part. The resulting hex for the try block is thus: DE 33 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00. Save the file as CrackMe2.cracked.exe.
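The same hex edit can be scripted. Here's a minimal sketch (Python, not part of the original post) that searches for the try-block bytes from the listing above -- with the metadata tokens byte-reversed, as stored on disk -- and overwrites them with the leave.s plus NOP padding:

```python
# Bytes of the original try block, per IL DASM's "Show bytes" view.
# Token operands ((04)000002 etc.) are stored little-endian, i.e. reversed.
original = bytes.fromhex(
    "02"          # ldarg.0
    "7B02000004"  # ldfld      (04)000002
    "6F2600000A"  # callvirt   (0A)000026
    "282700000A"  # call       (0A)000027
    "0A"          # stloc.0
    "DE0F"        # leave.s IL_0022
)

def patch(image: bytes) -> bytes:
    """Replace the try-block body with leave.s IL_0035 (DE 33) plus NOPs,
    keeping the overall length identical so no offsets shift."""
    pos = image.find(original)
    if pos < 0:
        raise ValueError("signature not found")
    replacement = bytes([0xDE, 0x33]) + b"\x00" * (len(original) - 2)
    return image[:pos] + replacement + image[pos + len(original):]
```

In practice you'd read CrackMe2.exe into `image`, call `patch`, and write out CrackMe2.cracked.exe; the sketch works on any byte string containing the signature.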

Run the program. Type in anything for the serial. “Thank you for registering.” The second textbox activates. We've won access to the coveted “Reverse Text” function. Write up an .NFO, making sure to remind people to purchase software to support the authors. Then kick back and play a game of KSpaceDuel.

Download the program itself (Right click and save as, since it's a .NET assembly and IEExec will try to run it otherwise): CrackMe2.exe (24 KB). Or, download the source: CrackMe2.cs.txt (4.81 KB).

Was this post interesting, helpful, stupid, or lame? Leave a comment and help me improve.

Code | IL | Security
Friday, November 26, 2004 5:22:20 AM UTC  #    Comments [13]  |  Trackback

# Tuesday, November 16, 2004
Why DRM for purchases is stupid and pointless

Digital Rights Management has been, and will continue to be, a hot topic for a while. On one end we have the MPAA and the RIAA, who are stuck in the early 1900s; on the other, intelligent consumers. In the middle, we have companies like Microsoft, who have to try to satisfy both ends of the scale. Then there are lesser companies that make DRM (like Macrovision) who go even beyond the MPAA and RIAA, in the sense that they try to propagate the need for their useless technology.

Why is DRM bad? Because it hurts the customer. It takes the flexibility and usefulness of a technology away. It's anti-innovation. No one wakes up in the morning and says “Hmm, I'd like to pay money to do less than I can do for free.” That's exactly what DRM does for consumers.

Some people defend DRM, saying that if there were no DRM, people would copy things left and right and collapse their industry. Apparently these people have never heard of eMule or Kazaa. Crackers and pirates are going to bypass whatever system you have installed anyway. Except for simple protections (say, an easy-to-use activation system that doesn't require an Internet connection), putting in extra runtime checks that interfere with operation makes things worse for your customers.

This doesn't mean you shouldn't encrypt your binaries or run them through an obfuscator. It means you shouldn't have software that polls in the background for debuggers that might be running, or secret checks on the CD to ensure it is legitimate.

For instance, take my post about stupid copy protection like SafeDisc. There, a legit customer suffered from the stupidity imposed by the corporation: you MUST SafeDisc all releases. In fact, I, a legitimate customer, had to turn to a pirate crack to be able to use the software I purchased. Had I pirated it to begin with, I'd never have run into trouble. Check that link out, and look at all the search referrals -- a LOT of people are having the same problems. The solution? Don't pay, just get a crack. Again, DRM messing things up.

Same thing for some of those dongle-based protection systems. If the software is worth it, it'll get cracked. However, legit customers don't get a crack. So, when their dongle fails, they get rather annoyed. Ask some Autodesk/Discreet customers about that, and I'm sure you'll hear some great stories. Nothing like shelling out $$$$$ for a top-of-the-line system, only to have your software say “Hey, you don't have a dongle. Thief! Call and buy our software!” a day before a deadline.

So, besides pissing customers off, does it hurt companies? Well, yes, and quite directly. An average user, say someone with an Autodesk product, might not go near cracks, thinking every crack download has a virus and whatnot. They might not want to, care to, or be able to install them. However, when the company FORCES the customer to figure it out (i.e., to meet a deadline, or to copy some music in time for a party, or to just bloody use the software they paid for), that customer now KNOWS how to work the pirate scene. The customer sees that, well, no, cracks don't erase your hard disk and delete your work while infecting every machine on the network with a virus. In fact, in some cases, things might work even better (like a SafeDisc game that pauses every few minutes to search for the CD).

Now what? Well, you've taken an innocent customer, and forced him into piracy once. Next time he needs 1 more license, he's got one less reason to purchase it. Next time there's a choice between running down and buying a DVD, or downloading a rip from the net (and avoiding region issues), there's less incentive to buy. “Average Joe” customers who would never have used a crack before (even if they wanted to), now might go ahead and do that. And recommend/show to their “Average Jane” friends.

And unlike earlier MacroVision stuff that protected analog tapes (ever try to copy a rental to VHS?) that required small $10 hardware cleaners to fix, things in the digital domain and on the Internet don't require any special hardware. Installing a crack can be as easy as 3 clicks. Deprotecting content can be done with a single click. Hell, Windows even asks me to decrypt DVDs when inserted in the drive -- no clicks required if I so desired!

I wonder how long it'll take people holding IP to realise that working WITH their customers instead of treating everyone like the devil will help them. It seems pretty obvious to me and everyone I've talked to. I wonder why it's so hard for them?

Misc | Security
Tuesday, November 16, 2004 2:30:31 AM UTC  #    Comments [0]  |  Trackback

# Wednesday, November 10, 2004
Some open source people say sending patches by email is OK (bad security ahead...)

BroadVoice released a patch for Asterisk that fixes some issues with SIP registration. They hired people and made a commercial patch. Way to go.

Then, they decided to *email* it to customers. Yes. In 2004. A company emailing patches to customers. Apparently they didn't think this was dumb. No link to their web site, no secure download from their website, nothing. In fact, the email was signed “The BroadVoice Team”, which is the signature I remember seeing on a few virus emails.

So, I responded to the Asterisk-users mailing list about this patch, saying how it was utterly ridiculous to do this, as it teaches customers to not be secure and go blindly installing stuff. Here are some of the comments I got back (and they aren't sarcastic either!):

“the patch is pure c code. it took me 5 mins to read & understand it. is very simple (but useful).
Simply that patch (apart from adding some logs, comments and little code formatting) simply caches auth data AND let * manage 403 responses from the server, and this last one perhaps is the issue that was overloading BV .
so, just read it (or let someone do for it) and understand that's not a problem :)“

“I don't see a security issue with his method. If you (a) read the entire patch and (b) comprehend fully everything that it does, then there's nothing to worry about. Fear comes from the unknown, and if you know everything in the patch, there's nothing to fear. “

“To claim that someone opens a security hole by accepting a verified patch via email, is the same as claiming that you never have a security hole just because you download from "trusted" sites. Webservers can be hacked, you know. And not every buffer-overflow will lead to a security issue -- many just crash the system. “

So, I think this goes some way towards showing that all is not well as far as security mentality in open-source land. I pointed out to them that “even Microsoft does it right” :). Didn't seem to make me popular.

Thinking that you can just read the code and be set is equivalent to saying there should never be any security holes in any code, because people will just read it and know. Add the fact that what you're combating is a possibly *malicious* security hole, not just an accident, and I think most devs would pass things right over.

Code | Security
Wednesday, November 10, 2004 11:57:11 PM UTC  #    Comments [0]  |  Trackback

# Thursday, November 04, 2004
Purchase thwarted by DRM

Windows decided not to load 2 of my hard disks (yes, I'm buying a RAID 1 setup tomorrow -- yikes!), so I wasn't able to access my music collection. I can't work at all without some sound. So I found a shared folder from KCeasy, and lo and behold, it had a few interesting things in it, so I queued them up.

Well, one track was by Ayumi Hamasaki, and I quite liked it. So I thought, hey, I'll go buy the CD. $28, not that bad. Oh wait, what's this?

Shopping Note:
· This CD is copyright-protected. The tracks on the CD can be played on PC running Windows operation system, but cannot be copied onto any PC nor can they be played on Macintosh operation system.

Can be played on a PC, but can't be copied? Huh? That means they do Some Very Evil Stuff. If a CD can't be copied, then something seriously wrong is going on. Now, the only thing I've heard of is that lame technology that puts a driver on your system to screw things up and then gives you access to WMA files only. I think it comes from a company that has the word “Sun” in its name. The one you can bypass by disabling AutoRun.

Well, I don't play CDs, period. My playlists are huge, I'm using my DVD drive for other things, and I hate the idea of passing physical media around for no reason. I also despise any company that tries to covertly install drivers to destroy my computer.

So, what's the outcome here? Well, I'm not gonna buy the CD. The artist loses $3? Oh no. I'll still get the music (gonna queue it up right now), and if I like it, I'll be a fan, and if I happen to be around where there's a concert, perhaps I'll go. But as far as paying for locked-down media? Screw 'em. In fact, if I'm going to PAY for the media, I'd like them to ship a professionally encoded set of WMA tracks at 256Kbps along with the CD audio. Actually, heck, just send me the WMA files. I don't need CD audio. Send me higher-quality WMA files (higher sampling rate, higher bit depth, the WMA Pro codec, lossless compression). Oh yeah, and get rid of the lame attempt at DRM. Then I'll buy.

Misc. Technology | Security
Thursday, November 04, 2004 4:25:20 AM UTC  #    Comments [0]  |  Trackback

# Monday, October 25, 2004
Missing the point: 1,064-bit encryption
If you don't get Crypto-Gram, or don't subscribe to Bruce Schneier's blog, do so. Today he posted a little gem about a county buying voting machines, which details their decision to use a certain vendor. One great reason: “Uses 1,064 bit encryption, not 128 which is less secure.” People actually trust these people to run their elections? On another note, they mention they can put the machines in junior highs to encourage voting. From what I remember of the U.S. education system, isn't that up to about 8th or 9th grade? If you're 18 and still in junior high, there are probably more pressing issues than voting... or maybe those are the people who grow up to select voting machine vendors?
Humour | Security
Monday, October 25, 2004 3:33:42 PM UTC  #    Comments [0]  |  Trackback

# Friday, October 15, 2004
MySQL is really secure... or bad.

I chose MySQL as my database, since I was writing on Linux, in C, and it just seemed like the easiest path. Can someone please say “you were so wrong”? MySQL has to be the worst DB engine out there. It didn't (OK, they just added it) even have support for SUBQUERIES! It barely has support for multiple charsets. And... binary(20) is NOT a binary field 20 bytes long; it's a char(20). You can't execute multiple commands in a single query. It's embarrassing to open source, really. I don't know who could argue that MySQL is competition for SQL Server or Oracle and keep a straight face. Check this list out: http://sql-info.de/mysql/gotchas.html (I really love the part about date handling.)

On the other hand, it's very secure. www.kalea.com.gt <-- no checking of user input whatsoever. (BTW, my little article about Kalea made me a top search result for Kalea Guatemala -- while their site doesn't even show up.) They take your querystring, concat it into their query, and off it goes. But guess what? Good luck trying to hack it. MySQL is so poor that doing SQL injection and achieving anything fun is nearly impossible. So much for adding prices to their site :). Oh wait, you can do a DoS by using the BENCHMARK expression with ENCODE/SHA1/etc.

So what am I going to do? Switch to SQL Server as soon as I get a release candidate done. I'm going to load Mono into my C app, transition into managed code, use some nice TDS libraries, and have a good day with a database that actually works well. Had I done that to begin with, I'd be a few hours ahead of schedule instead of behind it...

Code | Humour | Misc. Technology | Security
Friday, October 15, 2004 4:18:53 AM UTC  #    Comments [2]  |  Trackback

# Tuesday, October 12, 2004
Turing image generator for ASP.NET

Today I was coding a site, and I realised I needed an easy way to prevent automated signups. So, I did what everyone else does: added a Turing image. Since I was coding in ASP.NET 2.0, I thought it'd be nice to try out the new ASIX image-generator page type.

It's pretty nifty. Nothing you couldn't do with an ASHX in about 5 minutes, but still pretty cool. What I like is that the template starts you off right where you can begin coding against the Graphics object. This will definitely make entry much easier for people who aren't as comfortable with these classes. In the past I've normally been against things like this (i.e., a whole set of code just to save some minor work for one specific case), but I think this was a pretty good thing to add.

Download the code here: Turing.cs.txt. This is for ASP.NET 2.0 -- just create a new ASIX and point it at the Turing class. It should be pretty simple to hook up in ASP.NET 1.1, though; if anyone's interested, or if I somehow get more free time, I'll post the required ASHX handler. Anyway, from ASP.NET 2.0, all you need in your main page is this code:


string nonce = Turing.GenerateNewNonce();
ViewState["turingNonce"] = nonce;
this.turingImage.ImageUrl = "~/Turing.asix?nonce=" + Server.UrlEncode(nonce);

Then, to verify (say, in a validator), just do:

Turing.Verify((string)ViewState["turingNonce"], myTextBox.Text);

Just be sure to set EnableViewStateMac to true (otherwise someone can set “turingNonce” to a known value and render the system ineffective).

Note: I originally wanted to use a true nonce system, but instead ended up using simple encryption. So, it's possible to record the output for an image once (via the querystring data) and store it for later use (until the ASP.NET app restarts). I also use the Random class instead of RNGCryptoServiceProvider.

As well, since I only use 5 capital Roman letters, some basic AI should be able to defeat the algorithm. Add more letters and lines, change colours, etc. to make it stronger. There's some commented-out code that adds a dark gradient background. Playing around with this could make it harder for AI, at the cost of making it harder for your users.

I realised that the way things were, an attacker could request the image multiple times and get different output each time (since the noise is random). This could be used to run a couple of extra passes on the same code and increase the accuracy of AI against it. Or an attacker could request the code enough times to get an image that isn't very distorted and attack that.

The fix is to seed the random generator with something we can calculate from the nonce (to ensure it's the same image each time) combined with something the attacker cannot know (so he can't just run our code and see where the lines are). I do this by encrypting the nonce and taking the first 4 bytes as a seed for the Random class. At 5:33am, this seems solid enough to ensure the numbers are not known to the attacker.
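The idea can be sketched language-independently. Here's a rough Python illustration (not a port of my C# -- the secret-key name is a stand-in, and I use HMAC where the C# encrypts the nonce, which serves the same purpose): derive a seed that is stable per nonce but unpredictable without the server's secret.

```python
import hashlib
import hmac
import random

SERVER_SECRET = b"app-specific secret"  # hypothetical; lives only on the server

def noise_rng(nonce: str) -> random.Random:
    # Same nonce -> same seed -> identical noise every time the image is
    # re-requested; without the secret, an attacker can't predict the seed.
    digest = hmac.new(SERVER_SECRET, nonce.encode("utf-8"), hashlib.sha256).digest()
    return random.Random(int.from_bytes(digest[:4], "big"))
```

Requesting the image twice with the same nonce now draws identical noise, so repeated requests give an attacker nothing new to average over.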

Here's the updated code: Turing2.cs.txt

I think I'm going to A) add some image transformations to 'warp' the text somewhat, and B) really create a nonce system, instead of just relying on simple encryption.
Code | Security
Tuesday, October 12, 2004 1:19:43 AM UTC  #    Comments [0]  |  Trackback

# Sunday, October 10, 2004
MPAA/Security silliness strikes Miraflores mall

I went downtown to the newest mall built in Guatemala: Miraflores -- yet another example of a design that'd make anyone with any architectural sense sick. Built by the bright people over at spectrum.com.gt. At any rate, being somewhat bored, I decided to watch a movie. The theatres in the new mall aren't that bad.

As I walked into the mall, I saw a very interesting sign: no pets, guns, cameras, or video cameras allowed. While I can understand the first two items (although seeing a rabid Akita hunting people in a Gap would be amusing), what crackhead came up with the no-[video]-camera idea?

At the information desk, I verified that, indeed, they did mean no cameras allowed. On what possible premise? Security. Apparently taking photographs of public places is somehow a threat. So I pushed a bit more... “How exactly does this improve our security?” “Um... hmm... uh, I think there was a problem at another mall, so they're just doing it in case.” In other words: no freaking clue. I also asked if they check people for cell phones, since you could have a camera phone and covertly take pictures. She assured me they'd find people doing that and confiscate their phones.

Later on, I found out that the cinema has a $500 reward (probably 2x the monthly salary of the people working there) for finding anyone recording the movies. At the beginning of movies, they play a stupid commercial about not pirating movies, comparing it to stealing a car (again showing how spaced out the MPAA is). They actually have people with night vision scoping out the audience during the entire showing.

Now, I'm aware that they do this in the States. The stupid part is that in the USA, movies come out before you can buy them on DVD, download DVD rips (OK, not always), or rent them at your local movie rental store. Not so in Guatemala. The movie industry is quite backwards, and releases films much later in different parts of the world (hence their retarded DVD region coding crap). Well, by the time a movie hits Guatemalan theatres, *there is no market for screeners of that movie*!

I selected one movie to watch, but my sister told me they had rented it two weeks ago. Others I had seen in theatres in the USA, or had downloaded DVD rips of, months ago. Some were even at Blockbuster, less than 1km away. All of them are readily available from street vendors (in your choice of VCD or DVD). Yet they still find it necessary to go to extra lengths and “prohibit” cameras to stop this huge screener racket. Silliness. I'm sad to think that some of the population here might A) actually believe them, or B) not be offended that a company is trying to take away their freedom to carry a camera around.

For the sake of the country's prosperity, I'm planning some fun with these people:
1: Photograph and chart the entire mall.
2: Post pictures and schematics here. [For added bonus, mark up the schematics with writing in a script they don't understand.]
3: Distribute flyers at the mall with a URL; email Spectrum.
4: Enjoy the response.
1: Get some empty rolls of toilet paper or other cardboard items.
2: Add a red LED to these items.
3: Distribute at the theatre.
4: Watch employees go nutty thinking they're going to get $50,000 in reward money.
5: Have even more fun when I refuse to surrender my cardboard box.

Just need to find the time...

Guatemala | Humour | Security
Sunday, October 10, 2004 10:50:57 PM UTC  #    Comments [5]  |  Trackback

# Wednesday, September 29, 2004
VeriSign makes it easier to pose as a child online
i-SAFE and VeriSign announced their new product for kids: a USB device that acts as a smart card, with the cute name “i-STIK”. Apparently the problem of people posing as children online to later abduct them, or perhaps just to get a thrill out of pretending to be 12 again and talking with kids, is very large. So the plan is to authenticate all kids online. VeriSign says adults posing as kids will stick out “like a sore thumb”, since they won't have a USB key/device/card/stick. What's wrong?

Well, first, it won't work. There'll still be tons of kids without the cards, so it's dubious that other kids will stop talking to non-carded kids. Apart from that, software support is still non-existent. Last time I checked, IRC didn't offer a way to use a smart card. All sorts of communities would have to adopt this system. Also, it's “owned” by i-SAFE and VeriSign, meaning that implementing the system benefits only those companies.

Will the system allow kids to send S/MIME email? Half the people I know can't verify my signed email or have no clue what it is. One person (who works for a telecom company) got so confused about my signed email that he couldn't figure out how to forward the message (no idea which mail client he was using). And suddenly, i-STIK is going to solve all these software and end-user problems? Yeah, right.

The claims made on that page are utterly ridiculous: “...empower our youth with the key to unlock safe doors on the Internet...” and “...I am pleased that i-STIK technology will protect children from Internet predators...”. These quotes show the lack of understanding and the complete trust people are putting in this system. And this is where it gets bad.

Since this will be touted as “100% secure” and “perfect” (much as SSL is touted by cert-selling companies), the true issues will be ignored. Just like with biometrics, failure can be quite devastating, not because of the technology, but because of the trust placed in it. There are millions of kids in the States. That's a lot of tokens. And somehow VeriSign is going to ensure that tokens are correctly issued? Remember, VeriSign is the company that couldn't even stop itself from issuing fraudulent certificates in Microsoft's name. And now they're going to issue tokens to kids? Issuing a token to a child is harder, since this is supposed to be an “anonymous” system -- i.e., no personal data about the child is stored.

So what happens when tokens end up in the wrong hands? Well, parents, children, and teachers are taught to implicitly trust the tokens in whatever form they manifest (an icon next to the person's name in the software?). Thus, when an attacker has a token, he can freely impersonate any child he wants, and even assume multiple childish identities (since tokens are anonymous). Now, instead of everyone exercising the usual caution when the attacker makes a move, everyone trusts that it's OK, “since the little kiddie icon is there”.

Fortunately, the system will probably fail for other reasons, so we won't need to worry about this. But if it somehow succeeds (through clever marketing)... beware. The money going into such a system would be much better spent on education for kids, parents, and teachers. If your child is going to happily run off with someone they met online, no amount of technology is going to save him/her.

Press release: http://www.verisign.com/verisign-inc/news-and-events/news-archive/us-news-2004/page_016237.html
Wednesday, September 29, 2004 9:13:23 PM UTC  #    Comments [1]  |  Trackback

# Wednesday, September 22, 2004
TCP Throttling Support

Had to handle my first support incident from XP SP2's great bug ^H^H^H^H feature that is TCP throttling. Somewhere along the line, MS started listening to Steve Gibson when it comes to security. So they turned off raw socket support in XP SP2 and added TCP throttling. TCP throttling was added late in the game (I'm pretty sure it was at RC1 or later).

While there's no real reason to do these things, MS claims they add security -- because when a virus runs, it's absolutely impossible for it to use its own driver or get around “safeguards” like this, right? Sigh... MS usually has well-thought-out security measures that keep in mind that if malicious code is running as admin, it can do anything. At any rate, XP SP2 limits the number of pending outbound TCP connection attempts to 10. Yes, 10.

More than security, it sounds like MS wanted to cripple P2P networks, as a 10-pending-connection limit certainly hurts many implementations. Take eDonkey, for instance: I request a file and get, say, 300 sources. I'll need to contact each source to get added to its queue. Well, 300 sources * many files = LOTS of connections needed. Since many of the sources can be slow to respond (throw in high-latency connections -- ever use a satellite?), or are simply offline until the attempt times out, the 10-connection limit gets hit within seconds (I have eMule set to 512 connections max, with 128 per 5 seconds). Even the defaults are high enough to hit this silly limit.

So today I got a call saying that Outlook wouldn't contact my email server, and that afterwards they have to reboot their computer to access the Internet. After a bit of chat, I figured out it was XP SP2 being “helpful” by limiting this guy's network software. The solution? Tell him to google for a hacked TCPIP.sys that gives him unlimited connections. (I'd love to post it here, but I think it'd be a legal issue. Maybe instructions on how to patch your TCPIP.sys file would be OK... At any rate, use Google. Also, Neowin had a file in their forums for unlimited connections; other patches increase the limit to only 50.)

Great job -- forcing average users into downloading cracked system DLLs just to get basic functionality. Oh yea, and not accomplishing anything regarding security either. Fun.

Wednesday, September 22, 2004 9:06:56 PM UTC  #    Comments [0]  |  Trackback

# Thursday, July 22, 2004
Birthday attack in C#
How strong is a 128-bit hash? If you are looking to avoid collisions, the answer is not 2^^127, but 2^^64. Why? The birthday paradox. Wikipedia says: “Specifically, if a function yields any of n different outputs with equal probability and n is sufficiently large, then after evaluating the function for about √n different arguments we expect to have found a pair of arguments x1 and x2 with f(x1) = f(x2).” The name “birthday” comes into play because the same math says that in a group of 23 or more people, chances are about 50% that two of them will share a birthday. The actual formula is about 1.2 * Sqrt(n) (more precisely, Sqrt(2 * ln(2) * n) ≈ 1.177 * Sqrt(n)).

For a hash function, where strength is measured in powers of two, it's simple to calculate. For the exponent (128), just divide by two. So, we have 1.2(2^^(128/2)), but for most purposes, we leave off the 1.2 and just say 2^^64.

This means that if you're trying to find a collision, say, when attacking a digital signature system, the hash strength is considerably weaker than it sounds.

This sample program (Birthday.cs.txt (4.49 KB)) demonstrates this in C#, against a 32-bit hash (the first four bytes of MD5). Type in two messages, and it will find a collision by overwriting the first four chars of each message with random data. The code is not the cleanest, and it's definitely not optimized for performance. That said, the 32-bit hash is successfully attacked in about 2.3 seconds on my machine (3GHz P4).
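The experiment is easy to reproduce in any language. Here's a rough sketch in Python (not a port of Birthday.cs -- it enumerates suffixes instead of using random data) that finds a 32-bit collision with a dictionary, after roughly 1.2 * √(2^32) ≈ 77,000 hashes on average:

```python
import hashlib

def find_collision(bits: int = 32):
    """Find two distinct inputs whose MD5 digests agree in the first `bits` bits."""
    seen = {}  # truncated digest -> message that produced it
    i = 0
    while True:
        msg = b"message-%d" % i
        h = hashlib.md5(msg).digest()[: bits // 8]
        if h in seen:
            return seen[h], msg  # birthday collision found
        seen[h] = msg
        i += 1

m1, m2 = find_collision()
assert m1 != m2
assert hashlib.md5(m1).digest()[:4] == hashlib.md5(m2).digest()[:4]
```

Against the full 128-bit MD5 the same loop would need around 2^64 hashes -- which is exactly the point about collision resistance being half the digest length.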

How effective is this attack? Very. It's extremely easy to modify most document formats these days. Pretty much every format has some place where you can insert or replace “hidden data” -- things a user or system does not see or process. For instance, in HTML, you could simply put the collision data inside an HTML comment. In a plain text file, you could modify spacing, tabs, and perhaps some punctuation. It wouldn't change the meaning or validity of the document, but it allows you to generate enough variations to find a collision.

After finding two colliding documents, you send the “original” to the victim, who then signs it. Then you take the good signature and substitute your “bad” document -- presto, a fake signature.

How can you prevent this? One mitigation, which might not always work, is to modify the document yourself before signing it, so the attacker no longer controls the exact bytes being signed. The real fix is to use a hash long enough to provide the level of security you need. If you want “128-bit” security, in the sense that someone needs 2^^127 or so processing power to break it, then use SHA256. If for some reason you only have shorter algorithms at your disposal, one possibility is running the hash function again over a modified copy of the document (for instance, with every two bytes switched) and concatenating the results. This would give you a longer output.
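That last workaround can be sketched quickly. This is only an illustration of the idea in Python (using MD5 for concreteness), not a vetted construction -- a real system should simply use SHA256:

```python
import hashlib

def swap_pairs(data: bytes) -> bytes:
    """Swap every two bytes: b0 b1 b2 b3 -> b1 b0 b3 b2."""
    out = bytearray(data)
    for i in range(0, len(out) - 1, 2):
        out[i], out[i + 1] = out[i + 1], out[i]
    return bytes(out)

def double_width_hash(message: bytes) -> bytes:
    """Concatenate the hash of the message with the hash of its byte-swapped
    copy, turning a 128-bit digest into a 256-bit output."""
    return hashlib.md5(message).digest() + hashlib.md5(swap_pairs(message)).digest()
```

Note that a longer output doesn't automatically mean proportionally more collision resistance here, since the two halves are closely related; it only raises the bar over a single short hash.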
Code | Security
Thursday, July 22, 2004 9:29:51 PM UTC  #    Comments [1]  |  Trackback

# Sunday, July 18, 2004
AV makers are lame, but this takes the cake!
I got this press release forwarded to me via an MVP mailing list. I couldn't stop laughing! It's from a software vendor (Airscanner.com) who makes AntiVirus products for Windows CE devices: Smartphones, Pocket PCs, etc.  They're proudly announcing the first virus for WinCE, amidst so much FUD, it's funny! What's funny? Take a look:

1: They paint WinCE as the last hope and salvation of Microsoft.
“The Windows Mobile operating system is heir apparent to the Microsoft dynasty.  Microsoft knows the desktop and server OS market is saturated. There is no room for growth. And even as we speak, Linux erodes its market share.  How can Microsoft save itself?”
“Heir apparent”? I see... nope, no more shipments of WinXP or 2003 server will be going out, that's for sure. In the future, everyone works on tiny devices with relatively small processing power and storage, running a miniature OS. Windows Embedded is never used because that'd make too much sense. Welcome to the alternate reality where Airscanner lives.

2: They make silly claims about how “insecure” WinCE is:
“But there is a problem. Security is the biggest threat to Microsoft's survival. With its Trustworthy Computing initiative splintering under the pressure of weekly vulnerabilities, Microsoft would surely protect its most favored offspring. Right?
Wrong. Microsoft left its golden child naked and shivering. Windows Mobile has almost no security architecture whatsoever. It is wide open to attackers.”

WinCE is used on portable devices like PocketPCs, Smartphones, and MP3 (excuse me, WMA) players. What “security measures” should it have? It's a single user device you keep in your pocket. “Wide open”? Yep, just like my toaster, blender, VCR and DVD player are “wide open” for attackers. However, they do quickly go on to lavish praise on WinCE (since they're trying to make money off of it).

3: “Unfortunately, Windows CE was designed without security. Worse, handheld devices are now the easiest backdoor into a corporate network.”
Come again? Raise your hand the last time your Windows CE devices executed code under your domain account, on a domain computer. I don't see any hands. Raise your hand the last time your WinCE device executed ANY code on a corporate machine. Still no hands? WinCE adds no more risk to a corp network than already exists. Just more FUD.

4: Their terrorizing virus doesn't do anything. It prompts the user, “Can I spread?” and then proceeds to “infect” files. They portray this as a “proof of concept”. OK, what exactly does it do? Because it sounds very much like a program *that writes to the disk*! That's it, folks. It writes to files in your device's memory. If you're wondering what's scary, don't ask me. I guess the idea is to say “Basic IO works in WinCE! Run for your lives, arrg!” Well, Microsoft has had proofs of concept like this around for a while. They're called Build Verification Tests.

5: The virus writer (which I'm guessing was paid for by Airscanner) writes:
“This is proof of concept code. Also, i wanted to make avers happy.The situation where Pocket PC antiviruses detect only EICAR file had to end …”
He WANTS to make the AV companies happy. I see. So, some guy takes his time to write a virus that doesn't do anything malicious, and only spreads on demand, and mails it right to the AV companies, *just to make them happy*? OK...

Even better, apparently there are only two things their software checks for. This means that anyone can write an AV in about an hour. And they want $29 for this product. Well, I guess if they sold 5 copies, that'd work out to $145/hour for them, so that's not that bad, eh?

6: The people from this company apparently can't write a simple algorithm.
“If the file has been infected, it will be marked with the word “atar” at the offset 0x11C. This is used during the infection process to see if the file was already infected. Without this check, the virus would keep re-infecting files over and over until the device ran out of memory.“
Mind you, this is the AV company, not the virus writer. They apparently believe the only way to prevent an infinite loop over a set of items is to modify each item, “otherwise it'd run out of memory.” Are they truly saying there's no other way to do this? Sure sounds like it.

7: Even though it's low risk, they wanna play up the potential:
“Note, however, that in the lab we were able to easily bypass these protection checks by making small changes to the virus binary. There is nothing to prevent malicious users from doing the same and repackaging this malware as a Trojan.”

Repackaging it as a Trojan? Excuse me? The virus doesn't DO anything. Maybe they meant “by rewriting everything” instead of “making small changes to the virus binary”. Anyway, these things *don't spread*. Even if they tried to make it spread, it'd be very hard. The reason is that you don't usually copy EXEs from one mobile device to another. You usually have an installer or host management system that handles this for you. If I want to give you a game, say DiamondMine for PPC, I don't copy files from my PPC to yours. I give you the DiamondMine installer, which runs on your Windows XP machine and installs the game on your device.

For it to really spread, maybe it could email itself around. Of course, the steps would be: Get the email. Rename attachment (since EXE files are usually blocked). Copy to PocketPC device (since Pocket Outlook doesn't download attachments by default). Run file. You might as well just call the user and say something startling, causing him to drop the PocketPC. It'd do more damage that way.

Users beware: Desperate companies will make up whatever garbage they can to scare you into buying fake security products. Save your money and buy yourself a soft pretzel instead.
Humour | Security
Sunday, July 18, 2004 5:47:01 PM UTC  #    Comments [0]  |  Trackback

# Friday, July 16, 2004
Must read: Microsoft Research DRM talk

Before you form another stance on DRM, read this briefing. Cory Doctorow presented this talk to Microsoft last month. Cory's exactly correct about DRM. He explains exactly why I'm not going to buy any more DVDs or CDs until someone fixes the technology. Excellent article; a definite must-read if you're working with anyone in contact with DRM.

Misc. Technology | Security
Friday, July 16, 2004 12:45:01 AM UTC  #    Comments [3]  |  Trackback

# Thursday, July 08, 2004
InvisiSource Beta Shipping - Win an Xbox

Well, after quite some time, we've finally sent out the first beta of InvisiSource. It's an encrypted loader/obfuscator that I've been working on for quite some time. The reason it's been taking so long is that when we approach obfuscation, we try to make the obfuscation break as many rules as possible, to make the code even harder to reverse engineer. Unfortunately, it's quite easy to break too many rules and end up with something that won't run in every scenario. Over the past while, I've discovered many tricks that'd throw quite a screwball at a potential cracker. Unfortunately, the conditions on them make them unsuitable for every app. Other factors that took a while: debugging encrypted code and obfuscated code is, by design, hard :).

Anyways, we're going to be giving out Xbox systems to the top three beta testers (which is a good amount, considering the size of the tester pool). So, head on over to www.invisiSource.net and sign up!

IL | Misc. Technology | Security
Thursday, July 08, 2004 7:08:41 AM UTC  #    Comments [1]  |  Trackback

Safe or Secure

Two things that are often confused are safety and security. Aren't they the same? Well, no. The difference can be quite subtle in some cases, and not-so-subtle other times. Understanding the difference will help you see each for what it is, and not get a false sense of security.

Being safe means that you are free from harm via accidents. Being secure means being free from harm via attacks. What's the difference? Engineering a safe building might mean that it won't fall over if 100 extra people get in it, or if an earthquake occurs. Designing a secure building might mean that it won't fall over if someone fires a missile into it, for instance. The sandals I just bought have safety features (non-slip soles), but I don't expect them to be secure against someone leaving caltrops out on my balcony.

Most things we encounter in daily life are designed for safety. Even things sold for “security” are sometimes designed more with safety in mind: consider a can of mace. The models I've seen are designed so that it's actually a tad more difficult to fire them, as well as being weakened for “civilian” use. As civilians, we're under much more threat of accidents than attacks. I'm more worried about some drunk driving a car into me than an assassin waiting to run me down. This is a good thing: in real life, being secure from attacks is quite difficult.

Kidnapping/robbing/killing/whatever someone in most places is easy. The threat of punishment and an effective enforcement is what acts as a deterrent. In places that lack enforcement, say, where I used to live, such things occur much more frequently, not because they are any easier to do, but because there's no penalty. My dad was kidnapped down there, and almost executed. Our neighbours had something similar happen, but they weren't so lucky :(. However, even there, a bigger worry is a bus like this, this, or this.

However, in computer systems, the equation does not hold up. Users can delete their own documents, or pour coffee on their keyboards. But when connected to a network, you can get attacked from around the globe, millions of times per second. People are being blackmailed by the sending of a single email. And electronic attacks, unlike physical attacks, are usually harder to prosecute. If you have worked in IT for a while, you probably have a story or two where you could have made quite some money by breaking a law or two. (My favourite is the bank that called me for some work: they had one network with all their data on it. This network also had an NT4 machine running Exchange, and was directly connected -- just a router -- to the Internet.) These attacks would have been much harder in the physical world. In the real world, the bank should probably worry more about clearly marked emergency exit lights than someone driving a car through the wall.

Deciding how safe or how secure a system should be becomes very difficult. A classic example: data backup. On one end, we want our users to quickly recover from any problem. However, each backup copy made introduces yet another item to be secured. Your data would be safe from accidental deletion if you burned a CD with it and mass-mailed it à la AOL. However, it wouldn't be very secure. You can secure it by encrypting everything with no unencrypted backups and a single key, but if you lose the key, your data remains secure -- but not safe from loss.

How should a file delete function work? Safety says that the file shouldn't be wiped, just marked as deleted (or in the Windows Recycle bin case, just moved to another folder). Security says the area where the file is should be at least zeroed out.

AntiVirus software (for instance) is almost completely a safety product. It helps stop a user from accidentally running something bad. It does nothing if someone deliberately crafts an attack against them. It'll detect if a 10-yr-old installs NetBus on your machine. It won't do anything if a 16-yr-old first plays with the NetBus executable in a hex editor.

“Disabling” VBS and WSH scripts on your computer doesn't really increase your security. It just lowers the safety problem of someone accidentally clicking a script that is known to be harmful. It won't help if someone compiles that script into x86 and throws a .exe extension on it. On most modern PCs, there is no secure way to run arbitrary code (although managed code/virtual machines should alleviate this eventually).

People who place trust in these fake-security measures are being deceived by safety measures. It works because real-world counterparts are hard to come by. While this is good for the makers of such software, it can be devastating if it's not taken for what it actually is. For instance: if your computer is infected with some virus/trojan/whatever, cleaning it with AV software is *not* secure. The only secure action at that point is to re-install (or at least verify) the entire OS and configuration. For all you know, the trojan could have modified the Windows kernel, the AV interface, and everything else.

Fortunately, many times safety software won't actually hurt your security. Just running AV in a proactive mode doesn't make you less secure. It's the improper use and faith in this software that's dangerous. So, as always, getting a secure system can be really difficult. This is just one more potential pitfall to watch out for.

Thursday, July 08, 2004 6:15:51 AM UTC  #    Comments [0]  |  Trackback

# Tuesday, July 06, 2004
So, what's an IV?

If you've dealt with symmetric algorithms, such as DES, 3DES or Rijndael, you're probably aware that you must supply a key and an IV to encrypt/decrypt. If you're not aware of this, you shouldn't be writing code that works with cryptography :). Everyone knows what the key is, but what's the IV? IV stands for initialization vector. IVs are used to “jump start” the cipher stream. Not clear? It helps to understand how to look at a cipher.

Think of a cipher as a random mapping from a piece of plaintext to a piece of ciphertext. Most modern ciphers are block ciphers: they work on n-bit blocks of plaintext at a time. Thus we can imagine a cipher such as Rijndael (which uses 128-bit blocks) to have a huge dictionary: one entry for every possible plaintext and its corresponding ciphertext. In reality, there's not that much memory available, so instead the ciphertext is computed.

So let's take a sample message: “Hi Bob, how are you?” We'll split that into blocks: “HiBob HowAr eYou?”. With a particular key, the ciphertext might be “LaAHz IAtXm LyJxr”. Everything's nice and safe. Now, let's send another message: “Hi Bob, game?” This becomes “HiBob Game?”, and ciphertext “LaAHz KozhW”. Notice a problem? Since the first two blocks have the same plaintext, they will have the same ciphertext. If an attacker knows the format of the message, he can start to guess the first part of our messages (since “HiAlice” and “HiEve” would have different first blocks). This can get worse.
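The leak is easy to demonstrate. The “block cipher” in this Python sketch is just a keyed one-way mapping (a toy of my own, not a real cipher), but encrypting block-by-block over it behaves the same way: identical plaintext blocks always produce identical ciphertext blocks.

```python
import hashlib

BLOCK = 5  # toy 5-byte blocks, matching the "HiBob" example

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: a fixed keyed mapping from
    # plaintext block to ciphertext block (one-way, demo only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def blockwise_encrypt(key: bytes, msg: bytes) -> bytes:
    # Each block is encrypted independently -- a big lookup table.
    return b"".join(toy_cipher(key, msg[i:i + BLOCK])
                    for i in range(0, len(msg), BLOCK))

key = b"secret-key"
c1 = blockwise_encrypt(key, b"HiBobHowAreYou?")
c2 = blockwise_encrypt(key, b"HiBobGame?")
# The shared first block "HiBob" encrypts identically in both messages,
# so an eavesdropper can see the two messages start the same way.
```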

Imagine that the messages are orders, and the first block is the item number, the second the price, and the third the quantity. Now an attacker can determine (say by entering an order and looking at the output -- called a chosen plaintext attack) which ciphertexts correspond to which items/prices/quantities. Modification of the messages can be stopped by a digital signature algorithm. But what about reading? Enter the cipher mode.

The cipher mode I've been describing is ECB, Electronic Code Book. It's exactly as it sounds -- basically a big lookup. Each block is processed by itself. As shown, this isn't very secure for most applications. The most basic improvement is the CBC mode. (There are other modes as well, but CBC works for this article.)

CBC stands for Cipher Block Chaining. CBC takes the ciphertext of the previous block, and XORs it with the current plaintext block before encrypting it. Thus the ciphertext block for “10000” won't always be the same, but it'll depend on what the preceding plaintext is. So, the message “12345 10000 29500” will have completely different ciphertext than “54321 10000 29500”.

So, using the previous block is easy, but what about the first block? This is where the IV is used. The IV is the “previous encryption” for the first block. So when we encrypt “HiBob”, we're going to first XOR “HiBob” with our current IV.

IVs are not sensitive. You do not need to hide the IV. Many times, a unique message ID is used as an IV, since many applications require a unique ID anyways. It's perfectly fine to send along the IV as the first piece of ciphertext. Thus, we read the first block, and use that as the IV when decrypting. This makes managing the IV very simple, since it's right there with the message.

However, just remember to never reuse an IV! Reusing an IV defeats the purpose, since the benefit of the IV is negated: any given plaintext will always encrypt to the same ciphertext with a given key and IV. But since IVs aren't sensitive and are easy to manage, this shouldn't be an issue.
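The whole article can be condensed into one sketch. The “block cipher” below is a toy keyed one-way mapping of my own (not a real cipher), but CBC chaining over it shows both properties: a reused IV gives identical ciphertexts for identical messages, while a fresh IV does not -- and even within one message, repeated plaintext blocks no longer repeat in the ciphertext.

```python
import hashlib
import os

BLOCK = 16

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher (one-way keyed mapping, demo only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    # CBC: XOR each plaintext block with the previous ciphertext block;
    # the IV plays the role of the "previous block" for the first block.
    prev, out = iv, b""
    for i in range(0, len(msg), BLOCK):
        mixed = bytes(p ^ c for p, c in zip(msg[i:i + BLOCK], prev))
        prev = toy_cipher(key, mixed)
        out += prev
    return out

key = b"k" * 16
msg = b"ATTACK AT DAWN!!" * 2            # two identical 16-byte blocks
fixed_iv = b"\x00" * 16

same1 = cbc_encrypt(key, fixed_iv, msg)
same2 = cbc_encrypt(key, fixed_iv, msg)        # reused IV: identical output
fresh = cbc_encrypt(key, os.urandom(16), msg)  # fresh IV: different output
```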

Tuesday, July 06, 2004 4:19:29 AM UTC  #    Comments [0]  |  Trackback

# Sunday, April 11, 2004
How to protect your Windows NT hashes

So I've been worried that the NT password hashing calculation is: MD4(passwordInUnicode). Yes, that's right. No salt or anything. As you might imagine, this is bad. I was wondering how this can be mitigated, short of extra physical security (smart cards, for instance). I found that there is a way to cipher the passwords on disk: SYSKEY.

SYSKEY is running by default on Windows 2000+ machines. Basically it encrypts the password hashes with RC4, meaning the attacker must break the RC4 encryption. However, by default, SYSKEY runs in Mode 1, which stores the RC4 key as an LSA secret, so it's trivial to get it out. So, if someone has physical access to your machine, SYSKEY doesn't do much.

However, there are additional modes. One lets you use a password to derive the RC4 key; the password must be entered when the machine starts up. The other generates a random RC4 key and stores it on a floppy disk; the floppy must be present when booting.

To enable these, just run SYSKEY (Start -> Run: Syskey). Select the mode [and password]. Enjoy a more secure computer.

Sunday, April 11, 2004 5:59:06 PM UTC  #    Comments [0]  |  Trackback

# Tuesday, March 30, 2004
Why you should hash a lot

So, why do we care about multiple iterations, good salting, etc.? Isn't a simple MD5 hash enough?


Apparently not. Rainbow tables (almost 120GB in total) have been published, so that passwords like “!BinM,$YuSt.b7” can be easily cracked -- if you are using LM hashes. The newer NT hashes don't have this problem yet.

That's another thing to consider when determining password strength requirements. Normally we can say “Oh, doing n steps will take at least x time, and passwords expire in x/16 time, so we're safe.” However, if our apps are designed in a way that allows someone to precompute an attack and make a time/memory tradeoff, our password strength versus time no longer means anything.

Update: Edited article because as far as I can tell (they won't answer my inquiries) these tables do not attack NT hashes, only the weaker LM hashes (no surprise).

Tuesday, March 30, 2004 3:42:29 AM UTC  #    Comments [0]  |  Trackback

# Thursday, March 25, 2004
Storing passwords and hashing

I see a lot of articles on hashing passwords, however many of them skip over an important part of setting up this kind of system: iterations. But first, a quick primer on hashing in general.

Hashing is a cryptographic function that takes variable-length input and creates a constant-length output. The output is commonly called a hash, or a digest. The most common algorithms are MD5 and SHA1. MD5 creates a 128-bit hash, and SHA1 creates a 160-bit hash. There are also SHA256, SHA384, and SHA512 (SHA384 is essentially SHA512 computed with different initial values and the output truncated). It's computationally infeasible to find two plaintexts that have the same hash output. Hash functions are used in some common scenarios:

1: Creating a digest of a message to ensure the message was not modified (intentionally or unintentionally). Sometimes this is referred to as a checksum. eDonkey is an example that uses MD4 hashes to identify files (and as files are downloaded, they can be verified by computing the hash).

2: Digital signatures, where the hash is encrypted with the private key of an asymmetric algorithm (like RSA). This can then be decrypted by anyone with the public key, and checked against the computed digest to ensure that someone with the private key did “sign” the message, and that the message contents have not changed.

3: Securely storing passwords. Since a hash is a one-way function, it's impossible to *decrypt* the hash and recover the password. Well-designed systems will not store plaintext passwords (otherwise someone who reads the database could get your password and do nasty things as “you”). If you ever use a site that sends your current password back to you if you forgot it, then they most likely have a badly designed system (and you should question the rest of their security).

We're going to focus on the password issue. Attackers can figure out a password by computing the hash themselves for a suspected password, then comparing to the actual value. So, while the hash value might be 160-bits, it certainly doesn't take 2^160 steps to find the right password, since many users use weak passwords.
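That guessing attack fits in a few lines of Python (unsalted SHA1 and the sample password are mine, purely for illustration):

```python
import hashlib

def crack(stored_hash: bytes, candidates):
    # Hash each suspected password and compare against the stored value.
    # Weak passwords fall to a short wordlist long before 2^160 steps.
    for word in candidates:
        if hashlib.sha1(word.encode()).digest() == stored_hash:
            return word
    return None

# A server stored the unsalted SHA1 of a weak password...
stored = hashlib.sha1(b"hunter2").digest()
# ...so a tiny wordlist recovers it immediately.
recovered = crack(stored, ["password", "123456", "hunter2", "qwerty"])
```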

When hashing a password, it's common to add some random bytes to the password that are unique for the user. This is called a salt, and it ensures that each user has a different hash, even if the password is the same -- hash(“password”) will always return the same value, but hash(“password” + “randomData”) is going to be different. This means that an attacker must compute a separate hash for each possible password, *per user*. This helps stop an attacker from trying to attack all the users at once, since each additional user requires a complete, separate attack.

However, let's say that the attacker is going after a specific user. If the user picked an easy password, say 6 alphanumeric chars, the password's strength is ~36 bits (35.7 to be more precise, at 5.95 bits per char). This is assuming completely random characters are used, which is hardly ever the case. That's not that much work for an attacker, and we're considering 64-bit security (128-bit keys) to be the “required security” level.

However, suppose instead of calculating one hash per password+salt, we take the hash, and re-hash it n number of times, where n is something between 2^14 and 2^18? Well, now the number of steps required per password goes up that much. The 36-bit password now has an effective strength against brute forcing of 2^50 to 2^54. Essentially, by adding 2^18 steps to the hashing, we've added the equivalent of 3 *random* characters to their password.

So, do you need to iterate? Find out your minimum security level (48-bit? 56-bit?). Figure out how many iterations you can perform on your hardware before performance is unacceptable (probably between 2^14 and 2^18). Subtract that from your required level, and you have the minimum password entropy level.

For instance, let's say that I want to have 64-bit security from my passwords. My hardware can do 2^16 iterations without hurting logon times, thus I need 64-16= 48 bits of entropy in each password. This can be accomplished by requiring passphrases consisting of four common words (say a dictionary of 4000). (12 bits per word = 48 bits in the password + 16 for iterations, and I'm set).
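The arithmetic in the last few paragraphs is easy to check (a small Python helper of my own):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # Entropy of a truly random string: length * log2(alphabet size).
    return length * math.log2(alphabet_size)

six_alnum = entropy_bits(62, 6)      # ~35.7 bits: the "36-bit" password above
four_words = entropy_bits(4000, 4)   # ~47.9 bits: four words from a
                                     # 4000-word dictionary, ~12 bits each
```

Add the iteration exponent (say 16 for 2^16 iterations) to either figure to get the effective brute-force cost.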

Hashing is even more important when you don't have control of how good the passwords are. For instance, you're saving customer's credit card data, and the key is based off their password (so that they MUST login for your system to access that data). In these cases, requiring a complex password might not work for various reasons such as customer pushback, or risk of customer choosing something like your site name or their name as a password. It's important to determine the level of password complexity that will “push users over the edge” - the point when they stop using something remotely random, and start using things like their last name, their SSN, etc. When that point is reached, the entropy of their password is uselessly low.

Now, assume a semi-casual attacker with a strength of 40 bits: he's got the power to do 2^40 steps of computational work. If your users use 24-bit passwords, their hashes can be broken by this attacker easily. But with 2^18 iterations, those weak 24-bit passwords now require 2^42 steps, and the hash is safe.

So, there is really no good reason not to do multiple iterations. Even 1024 will provide some strength (equivalent to 2-3 extra characters in the password). In fact, the .NET Framework already has a class that does all of this (hashing with whatever algorithm, salting, and iterations) for us: System.Security.Cryptography.PasswordDeriveBytes. Use it!
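PasswordDeriveBytes is .NET-specific; the same salt-plus-iterations scheme is available in Python's standard library as hashlib.pbkdf2_hmac. A minimal sketch (the wrapper function names are mine):

```python
import hashlib
import os

ITERATIONS = 2 ** 16  # tune to your hardware's acceptable logon time

def hash_password(password: str, salt: bytes = None,
                  iterations: int = ITERATIONS):
    """Salted, iterated hash: every guess now costs the attacker
    `iterations` HMAC computations, per user."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = ITERATIONS) -> bool:
    # Recompute with the stored salt and compare.
    return hashlib.pbkdf2_hmac("sha256", password.encode(),
                               salt, iterations) == digest

salt, digest = hash_password("correct horse")
```

Store the salt and iteration count alongside the digest; neither is secret.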

Code | Security
Thursday, March 25, 2004 4:06:06 AM UTC  #    Comments [1]  |  Trackback

# Monday, March 08, 2004
Nothing is secure

One thing to keep in mind is that nothing that I know of in this world is secure.  I'm not just talking about software.  Dictionary.com defines secure as “free from danger or attack”.  Can you think of ANYTHING that meets that definition?  Leave a comment and win a prize if you can.

Security is about probabilities.  “How secure is X?” is often asked.  Does that mean if we use ultra-high encryption that it's impossible for someone to break through?  If I chose a 256-bit key right now and encrypted my data with it, is my data secure?  Remember, it's *possible* that someone could guess a 256-bit key in one shot.  The probability of that is usually extremely low, although if I picked a key of all zeros a system might try that to start off and thus win in one turn.

So, when choosing your defenses and making your tradeoffs, always consider the probability of a certain attack occurring.  Wasting time “bulking up” defenses in one area while ignoring weaker areas is like optimizing code that isn't slowing your system down: pointless and a waste of time.  You will never have something that's “secure”.

Code | Security
Monday, March 08, 2004 6:48:27 PM UTC  #    Comments [0]  |  Trackback

Cracking code - Part 2: Other simple attacks

In part 1, we attacked the code by stopping it at a known point, the “invalid code” message box.  From there, we were able to trace up to where a decision was made as to the validity of our serial/code, and change that logic around.

Going through someone else's compiled x86 code can be somewhat like going through your server's logs to find some specific information.  Most people don't start with log entry 0 and read each one.  We filter the logs, look for error entries, etc.  Depending on what we do know about the events we are looking for, we can find the related entries in different ways.  The same applies when going through code.  Here are two other simple things that we could do to SimpleCode.exe to break it:

We could search for all strings, and then look for “good” strings -- something we'd expect to see when our code is valid.  OllyDbg can dump these strings and search them, and then take us to the places where they are used.  From there, we can track up and see where/why that code wasn't called.

Our input
Every program needs to take our input, then somehow validate it.  If we enter some data that's easily recognizable (like “AAAA”), we can set a breakpoint on memory access to that location.  From there we can figure out what's being done to our input, which is useful for reverse engineering -- creating a “keygen”.  Having a keygen is much more valuable, because we don't need to make binary patches and modify the executable.  Between different versions of the software, the key validation will probably remain the same.  If we know how to generate our own keys, we have a “one size fits all” attack.

Code | Security
Monday, March 08, 2004 6:01:59 PM UTC  #    Comments [0]  |  Trackback

# Tuesday, March 02, 2004
Cracking code - Part 1

Update 2004-03-07: Added screenshots.

Read the intro to find out why I'm writing this.

Alright, before we get into attacking .NET, let's see how it's done against common Win32 programs in x86.  First, you'll need a good disassembler/debugger.  I recommend OllyDbg.  It's very easy to use, and does a good analysis of the code, which helps us out quite a bit.  SoftICE is another alternative, but it's low-level, harder to use, and it costs $1000.  People tend to use this when they want to debug something like a device driver, or make a patch for Windows.

Here's the executable I wrote for this sample: SimpleCode.exe (44 KB) and if you feel like cheating, the source code: SimpleCode.cpp.txt (1.28 KB).  It's very simple.  In fact, the whole purpose is to validate the user code -- there's no real content that's protected.  However, it will be enough to learn from.  Also note that it only runs on Windows 2000 and above.  If you aren't using that OS, upgrade :), or get the code and fix it, or email me for a version you can use.

So, let's open OllyDbg and make sure the analysis options are on (Alt-O, check all of them out).  Now, load SimpleCode.exe.  OllyDbg loads and disassembles the code.  You now have a console window open, and a bunch of x86 on your screen.  Let's run through the program (F9).  Enter 4 chars for your serial, and 4 for your activation code (no checking is done, so you'll screw up the program if you enter more data).  A message box appears telling us the code is invalid:

That's our way in, for this example.  We know that somewhere before the message box was shown, our activation code was tested.  So, let's go breakpoint at the message box.  Restart SimpleCode (Ctrl-F2).  Right click in the main window and select Search for -> All intermodule calls.  In the new window, type MessageBox.  You'll see two calls to MessageBoxA.  A real program would have many more.  Right click one of the calls and select “Set breakpoint on every call to MessageBoxA”. 

Run the program and enter fake serial/activation again.  The program breaks at “00401163  |.  FF15 DC804000 CALL DWORD PTR DS:[<&USER32.MessageBoxA>>; \MessageBoxA”.  If we look up a bit, we can see that the arguments loaded are for the invalid serial.  This is the message box we want.  Go into breakpoints (Alt-B) and disable both breakpoints.  Now, the opcode right after the MessageBoxA call is C3, RETN, the end of the function.  Considering the code for this function is very short (21 lines), it should contain only the “bad” code -- code we don't want executing.  Press F8 to step over that call.  Dismiss the message box.  Notice you can press “;” to add comments to lines.  It'd be good to mark this line with something like “Return from displaying bad message box.”, just in case we get lost later on.  In many programs, there will be many interesting points, so good commenting is key.

If you're going to be doing real attacking, you need to learn some X86.  Important things are CALL, RETN, the various jumps, and comparisons.  Because most likely, somewhere inside your target program, a check is performed and then a corresponding action is taken.  If we can reverse the logic, then we can make the program think correct data was entered when it wasn't (and the opposite: correct data will be considered incorrect).

Now we're about to return to the point that called this function.  Press F7 to see where that takes us.  Now we're on “00401274  |.  8B4C24 3C     MOV ECX,DWORD PTR SS:[ESP+3C]”.  The line above that is the callsite of the “bad display function”.  Comment it as such.  Look around.  OllyDbg should display some arrows indicating jumps and targets.  If it doesn't, go into debugging options and check your settings.

Notice that the callsite of the bad function is a jump target from “00401259  |. /74 14         JE SHORT SimpleCo.0040126F”.  If we take the jump, we end up calling the bad function.  If we don't, we RETN (look at the line right above the bad callsite).  Sounds interesting.  Set a breakpoint on that JE instruction, restart, run and enter the data.

JE means “jump if equal”.  Its opposite is JNE (jump if not equal).  Our program is stopped right now at a JE, and OllyDbg says the jump will be taken.  Since the jump goes someplace bad, we don't want it to happen.  Press space.  This opens the assembler.  Change the JE to JNE and press Assemble.  OllyDbg patches the in-memory executable.
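Under the hood, that patch is a single byte: the short-jump opcode for JE is 0x74 and for JNE is 0x75, so the two differ only in the low bit.  Here's a small Python sketch of the same flip applied to a byte buffer (the bytes are taken from the listing above; in a real binary you'd need to find the right file offset yourself):

```python
def invert_short_jcc(image: bytearray, offset: int) -> None:
    """Flip a short JE (0x74) to JNE (0x75), or back again --
    the same one-byte patch OllyDbg's assembler made for us."""
    opcode = image[offset]
    assert opcode in (0x74, 0x75), "not a short JE/JNE"
    image[offset] = opcode ^ 0x01  # the two opcodes differ only in the low bit

# The bytes at 00401259 in the listing: 74 14 = JE SHORT +0x14
code = bytearray.fromhex("7414")
invert_short_jcc(code, 0)
assert code == bytearray.fromhex("7514")  # now JNE SHORT +0x14
```

This is why binary patches are so fragile: the patch is tied to one exact byte at one exact location, and any rebuild of the target moves it.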

Let's see what happens.  If we're lucky, this will call the “good” code.  If not, we just patched something else, and at best the program is going to do something strange -- most likely it will crash and burn.  Press F9.

What's that?  Thanks for activating?  Why, you're quite welcome!  That jump did it.  Wasn't that easy?  And we didn't have to learn much X86 at all.  To save your changes, we'll need to restart (OllyDbg will complain since the breakpoint code was patched and changed), go to the breakpoint, and re-patch.  This time, right click and select “Copy to executable -> All modifications”.  Now we've got a patched program.

This was extremely easy (it was a very simple program!), and just demonstrates one way that someone could attack your code.  It's also an inflexible attack (a binary patch, versus finding the algorithm), so if a new version is released, we need to debug and patch it again.  Hope you learned something!

Update 2004-3-8: Part 2 now available.

Code | Security
Tuesday, March 02, 2004 7:20:18 PM UTC  #    Comments [9]  |  Trackback

Cracking code - Introduction

To defend, you must have some idea of what you're defending, and who and what you're defending against -- specifically, which attacks.  Failing to understand these things means your defense will most likely not be effective, and could in fact decrease your security.  Here's an example:

Near where I live, thieves were stealing cars that people parked in the street.  The neighbourhood committee decided they'd stop this.  The solution they implemented was to put gates at all entrances and exits of their area, with guards who only let cars with a particular sticker through.  This makes people FEEL more secure.  However, for the cost (guardhouse and gate construction, guard salaries), it's not as effective as it could be.  A thief can still walk in just as easily (the gates only block roads), and when driving a stolen car out, the guards will see the car and its sticker, recognise it, and let them leave.  If the committee had thought about how thieves operate, they would have realised this and done something more effective -- perhaps hiring the same number of guards but setting them on patrol instead of having them sit at their posts.  With unlimited resources, they could do both, and give each resident a special remote key-code to unlock the gate when driving.  However, that tradeoff in cost and convenience is too high for them.

This is how security is, in the physical and electronic worlds.  We have many possibilities, each with their tradeoffs.  Deciding which measures to implement requires us to understand how our opponent is going to operate, as well as the details of how exactly our defenses work.

In this series, I'm going to show you how to crack simple code.  I'm going to make a series of samples to try this out on (to avoid DMCA problems with real code), so we can get a feel for what crackers do to code.  It is not going to be in-depth or show how to become a master cracker -- just enough that we could attack a simple Windows/.NET program's licensing key system, which is a common theme in software protection.

Continue to Part 1, where we'll crack some simple code...

Code | Security
Tuesday, March 02, 2004 5:26:40 PM UTC  #    Comments [5]  |  Trackback

# Monday, March 01, 2004
Processing HTML into safe HTML with .NET - Part 3

Now that I've decided on which library to use, I'll describe the actual code.

We already know that HTML, esp. in Internet Explorer, provides many attack vectors.  And new versions of the browser could add another tag or attribute that can execute code.  So we need to use a whitelist, not a blacklist.

Next, there are many more legit users than attackers.  So when dangerous content is detected, it needs to be removed -- we can't just blow up and tell the user not to hack us.  The number of false positives could actually be rather high, since some people are going to use Word and end up with a lot of tags and who knows what else.  And finally, users could accidentally paste something that's potentially dangerous.  Yelling at them, or even telling them to fix their code, isn't going to work, since they may not even be aware that HTML exists.

So, here's the code:
SafeHtml.cs.txt (3.28 KB).  It's very short and easy, thanks to the HtmlAgilityPack.  The processing of style tags is pretty weak (simple replacements), but should do the trick.  Enjoy!
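If you just want the shape of the whitelist approach without downloading the C# file, here's a minimal sketch using Python's standard html.parser.  The allowed tags, attributes, and URL schemes here are illustrative choices of mine, not the ones in SafeHtml.cs -- the point is that everything not explicitly allowed gets dropped, and the document is rebuilt from scratch:

```python
from html import escape
from html.parser import HTMLParser

# Illustrative whitelist -- NOT the set used in SafeHtml.cs.
ALLOWED_TAGS = {"a", "b", "i", "em", "strong", "p", "br", "ul", "ol", "li"}
ALLOWED_ATTRS = {"a": {"href"}}
SAFE_SCHEMES = ("http:", "https:", "mailto:")
DROP_CONTENT = {"script", "style"}  # kill the tag AND everything inside it

class WhitelistFilter(HTMLParser):
    """Rebuilds the document, keeping only whitelisted pieces."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in DROP_CONTENT:
            self.skip += 1
            return
        if tag not in ALLOWED_TAGS:
            return  # unknown tag: delete it, keep its text content
        kept = ""
        for name, value in attrs:
            if name not in ALLOWED_ATTRS.get(tag, set()):
                continue  # onclick, style, etc. never survive
            if name == "href" and not (value or "").strip().lower().startswith(SAFE_SCHEMES):
                continue  # blocks javascript:, vbscript:, data: links
            kept += ' {}="{}"'.format(name, escape(value or "", quote=True))
        self.out.append("<{}{}>".format(tag, kept))

    def handle_endtag(self, tag):
        if tag in DROP_CONTENT:
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED_TAGS:
            self.out.append("</{}>".format(tag))

    def handle_data(self, data):
        if not self.skip:
            self.out.append(escape(data))

def sanitize(html_text):
    f = WhitelistFilter()
    f.feed(html_text)
    f.close()
    return "".join(f.out)
```

For example, `sanitize('<b onclick="x">hi</b><script>bad()</script>')` yields `<b>hi</b>` -- the event handler and the script block are simply gone, and nothing that wasn't on the list can sneak through.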

Update 2004-Mar-04: Forgot to handle <A href=”scriptType:code...”>.  Be sure to add that if you use this code in production.

Code | Security
Monday, March 01, 2004 7:14:24 PM UTC  #    Comments [0]  |  Trackback

Processing HTML into safe HTML with .NET - Part 2

Following up from part 1, I reviewed three different libraries: SgmlReader, HtmlAgilityPack, and DevComponents.com's HTMLDocument.

HTMLDocument is a commercial component ($249 per dev, incl. source code).  The other two are libraries written by some cool people at Microsoft and include source code.

SgmlReader is basically an XmlReader that can handle HTML.  To write, we need to use an XmlWriter, and that can mess up the HTML, and we don't want that.  SgmlReader seems like it'd be ok if all we wanted to do is determine if there's unsafe content and then return false, but that's not what we need.

However, both HtmlAgilityPack and HTMLDocument read HTML and create a DOM out of it, allowing you to modify it and write the HTML back out.  This is what we need.  I briefly looked over both libraries to see which one I want to program against.  I gave them both an equal rating to start off with, but the scales rapidly tipped in favour of one library.

HTMLDocument definitely loses as far as API niceness and robustness go.  Some problems:
  • Inconsistency when loading data into the HtmlDocument.  If you have a string, it needs to go in the constructor, otherwise, use an instance method.
  • Enums (both of them) are prefixed with “e”.  Why?
  • Lack of types.  There are four types total.  That's all.  No HtmlAttribute.  No HtmlElementCollection.  Nothing like that.
  • Weak-typed collections.  ArrayLists and Hashtables are used as the collections, instead of strongly-typed ones.  So you must cast, and if you insert an unsupported object, it will throw an exception when writing the HTML.  Not very robust.
  • And the silliest thing of all: No encoding support.  Worse than that, FORCED ASCII.  If you open a file, their code opens a stream, manually passing ASCII encoding.  No BOM detection, no system default, just ASCII.  Ouch.
These things made me seriously doubt how professional a library HTMLDocument is.  Most of these things are ultra-simple to fix.  If I were forced to use this, I'd have to buy the source code just to make it right.  It seems like its purpose is to demonstrate how not to construct a class library.

What's more, HtmlAgilityPack doesn't have any of these flaws.  In fact, it seems like it's actually a missing piece of the base class libraries.  Superbly done.  Writing code against it was so easy and natural.  I'm extremely impressed.  Even the documentation is much more complete (it comes with a 180KB HTML Help file, compared to HTMLDocument's 36KB one).

Hands-down-winner: HtmlAgilityPack.
Code | Security
Monday, March 01, 2004 6:54:09 PM UTC  #    Comments [0]  |  Trackback

# Sunday, February 29, 2004
Learning MSIL

I was going to write a series about learning MSIL (Microsoft Intermediate Language, or simply “IL”), and then get into more advanced topics.  However, I found a good tutorial (and no doubt there's more if I use Google for a minute) at CodeGuru, called MSIL Tutorial.  It should be enough to get people up to some speed.

I'll be writing some articles about how people actually attack programs, starting with nice x86 assembler, and then showing how attacks against .NET programs can use many of the same vectors.  I'll show how, even with some weak obfuscation (and by weak I mean pretty much every product currently available), crackers still have an easier time on .NET than on native x86/Win32.  Then I'll talk about some mitigation techniques that can be used to make things somewhat harder.

Code | Security | IL
Sunday, February 29, 2004 5:43:53 PM UTC  #    Comments [2]  |  Trackback

Processing HTML into safe HTML with .NET - Part 1

In an application I'm currently writing, we allow users to write messages with HTML markup in them, to deliver a rich experience.  The obvious problem is making this secure.  We don't want UserA to write a malicious script and steal some of UserB's data.  IE provides some cross-site scripting defense, but defense-in-depth (well, not even that deep in this case) would have us ensure that the HTML doesn't contain anything executable.  I've seen some samples that claim to clean the HTML with not much code at all.  They check a few tags and think they're done.  Of course, they aren't.

The problem is that IE is extremely powerful.  While this is great when developing an intranet application, it makes finding all the attack vectors nearly impossible.  For instance, we might think that a style attribute is ok, right?  Wrong.  There are two problems that I can think of (without thinking too hard).  First, someone could use styles to “overwrite” links on the page by using absolute positioning.  They could then change the “My Account” link into a link that goes to their own server, and steal the user's information.  Second, the style attribute can be used to load an HTML Component (.HTC).  This can contain lots of script.  That's bad.  And this is just in one little attribute!
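To make this concrete, here's why the “check a few tags” samples fail.  A hypothetical blacklist cleaner that strips `<script>` blocks (the usual approach in those samples) passes every one of the attacks above completely untouched -- the payload strings below are drawn from the discussion, with made-up handler names:

```python
import re

def naive_clean(html: str) -> str:
    """A blacklist 'sanitizer' of the kind the samples use:
    strip <script> blocks and call it a day."""
    return re.sub(r"(?is)<script.*?</script>", "", html)

# Attack strings from the discussion above; the function names are invented.
payloads = [
    '<div style="position:absolute; top:0; left:0">My Account</div>',  # link overlay
    '<div style="behavior:url(evil.htc)">hi</div>',                    # loads an HTC full of script
    '<img src="x" onerror="stealCookies()">',                          # no <script> tag needed
]

for p in payloads:
    # every payload sails through completely untouched
    assert naive_clean(p) == p
```

A blacklist can only block what its author thought of; a whitelist blocks everything its author didn't think of, which is the side you want to be on.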

Needless to say, there are many, many more attack vectors.  Even if we could find them all, that doesn't help users when they get a new browser with upgraded and different capabilities.  So, we're going to have to resort to a “safe” HTML subset.  We'll go through the MSDN reference and pick out the tags and attributes that we consider safe, and anything else will simply get deleted.

Sounds easy enough, except we've got to parse the HTML.  Not fun.  Fortunately, I've found two libraries that do this.  The HtmlAgilityPack, written in C# by Simon Mourier from Microsoft (source included), and DevComponents.com's HTMLDocument, a commercial but inexpensive library.  If anyone knows of other HTML parsing libraries, please leave a comment.  In part 2, I'm going to review the APIs of the different libraries.

Code | Security
Sunday, February 29, 2004 4:45:27 AM UTC  #    Comments [2]  |  Trackback

# Saturday, February 07, 2004
Any better ideas?

Every time the TSA is criticised for their silly airport checks, like removing sandals, some bloke comes along and says “Yea, well, airplane security might not be perfect, but can you think of anything better?”

Does anyone realise how ridiculous this is?  First, if you think the TSA's approach is correct, then, well, it's pretty pathetic to have to ask perfect strangers for anything better.  Since most people don't know much about security, it's like having a bad designer decorate your house and, when criticised for the horrible design and colouring outside the lines, respond only with “yea, well, YOU go do something better.”

At any rate, there is something better.  The security hole exploited on 9/11 was one that allowed cockpit access.  It had nothing to do with letting people with weapons on board.  It's so incredibly easy to get weapons on board that I'd be surprised if anyone with an IQ over 105 couldn't figure out how to get a .22 pistol on a plane.  So, the answer is to plug the security hole (a cockpit access vulnerability) and ensure that even if 10 people come aboard with nunchaku, .22 pistols, and crowbars, they cannot gain control of the plane.

However, what the TSA is doing is similar to not patching a system, yet enacting all sorts of false security measures.  For instance, let's say a new Blaster variant comes out and attacks Windows machines on port 135 using a new, unpatchable hole.  Since that's somehow related to Windows networking, our fake security advisor says: “Ha!  We'll turn off file and print sharing.  Yea, it'll annoy everyone and make our network useless, since that's what we use the network for.  But we need to be secure!”

Then someone who hasn't been hit by a bus or any other large, blunt object says “That doesn't solve the problem!  You can still be hacked, and that's a useless measure.  Stop annoying everyone and actually concentrate on real problems.”

Can you imagine that person being told “Well, maybe not, but at least our CEO feels better, and hey, what's your great idea?”  That's pretty much what the DHS and TSA do.  “Sure, we don't know crap.  But we'll be damned if we're actually going to take any decent suggestions.  Now there, please remove your sandals and your watch.”

To the untrained eye, glass can appear as diamond.  Thus, to the security-blind, enacting useless fanfare security measures looks to be genuine.

Saturday, February 07, 2004 1:06:33 AM UTC  #    Comments [0]  |  Trackback

# Friday, February 06, 2004
U.S. government's security cluelessness summarized

I was going to write about the absurdity of www.dhs.gov.  But that's pretty much been covered many times, and I doubt I'd say anything new.  However, while browsing the site, I found a link to www.safteyact.gov, which helps companies that make “anti-terrorism” products.  Of course, the definition is so broad that it could apply to almost anything if you have a shred of creativity.  Hmm, maybe I'll submit our Obfuscator and see if that qualifies.

At any rate, the interesting thing on www.safteyact.gov is that you are immediately redirected to HTTPS (after some text saying “You are about to be redirected to a secure site”).

Now, why do you suppose they do that?  The site just sends down rather public information.  Anyone can go get it.  There's no sensitive data in transit.  My theory is that some... special... person thought that since the site is remotely related to actual security, why, by golly, they should be using SSL!  Otherwise hackers can get in.  Or terrorists.  Or something like that.

Sounds like the DHS (and its vile child, the TSA) so far.  But then, what's this?  SSL errors.  Revocation list not available.  Ok.  And then we get the nice message that this site's SSL certificate was signed by “DHS Test CA1”.  Yep, that's right ladies and gentlemen, they pulled a cert out of their hats.

This pretty much summarizes U.S. government security.  “We're clueless, but we're gonna do *something*.  That something doesn't have to make any sense, or even be implemented correctly.”

Yes, I know there are some smart people working in the U.S. government.  (At least one is an MVP!)  And the site actually loads, so someone, somewhere, even if it's a subcontractor, has enough sense to figure out how to press a power button and save files.  My guess is that whoever made this site wasn't a moron, but had a conversation like this:

“Hey, web designer, we've got a security exploit.”

“I'm not a web designer.  I'm a server admin.  And what exploit are you talking about?”

“Whatever, you work on the Internet.  Our site isn't secure.”

“Yes it is, we've got firewalls configured correctly, patches, monitoring, and the passwords are managed--”

“But I don't see a lock thingy in the Internet!”

“Right, the lock icon won't appear in your browser since we don't use SSL, the Secure Sockets Layer.  We don't need it because we're not transmitting sensitive information.”

“I don't care!  I wanna see a lock icon thingy 'cause that means our site is secure, right?”

“Well actually, it means that data in transit is encrypted and--”

“Exactly!  Encryption means it's secure.  You should know this.  So, when will we have the lock icon thingy?”

“... Can you stand up, sir?  I need to get a certificate.”

Friday, February 06, 2004 3:42:13 PM UTC  #    Comments [1]  |  Trackback

# Tuesday, January 13, 2004
Base32 in .NET
I haven't seen any .NET Base32 implementations, but various people have expressed interest in having some simpler way to represent binary data (such as an encrypted keycode).  So, I'm posting a sample Base32 encoding.  Note that this does not conform to the standard Base32 encoding, but uses its own set of characters (useful for keycodes, where we don't want to have to differentiate between 0 and O).  Thanks to Juan Gabriel for making the code much better :).

Update 2004-2-5: Thanks to Philippe Cheng for fixing a bug that caused extra (harmless) output. (See comments for details).

using System;
using System.Text;

public sealed class Base32 {
      // the valid chars for the encoding
      private static string ValidChars = "QAZ2WSX3" + "EDC4RFV5" + "TGB6YHN7" + "UJM8K9LP";

      /// <summary>
      /// Converts an array of bytes to a Base32-k string.
      /// </summary>
      public static string ToBase32String(byte[] bytes) {
            StringBuilder sb = new StringBuilder();         // holds the base32 chars
            byte index;
            int hi = 5;
            int currentByte = 0;

            while (currentByte < bytes.Length) {
                  // do we need to use the next byte?
                  if (hi > 8) {
                        // get the last piece from the current byte, shift it to the right
                        // and increment the byte counter
                        index = (byte)(bytes[currentByte++] >> (hi - 5));
                        if (currentByte != bytes.Length) {
                              // if we are not at the end, get the first piece from
                              // the next byte, clear it and shift it to the left
                              index = (byte)(((byte)(bytes[currentByte] << (16 - hi)) >> 3) | index);
                        }
                        hi -= 3;
                  } else if (hi == 8) {
                        index = (byte)(bytes[currentByte++] >> 3);
                        hi -= 3;
                  } else {
                        // simply get the stuff from the current byte
                        index = (byte)((byte)(bytes[currentByte] << (8 - hi)) >> 3);
                        hi += 5;
                  }
                  sb.Append(ValidChars[index]);
            }

            return sb.ToString();
      }

      /// <summary>
      /// Converts a Base32-k string into an array of bytes.
      /// </summary>
      /// <exception cref="System.ArgumentException">
      /// Input string <paramref name="str"/> contains invalid Base32-k characters.
      /// </exception>
      public static byte[] FromBase32String(string str) {
            int numBytes = str.Length * 5 / 8;
            byte[] bytes = new byte[numBytes];

            // all UPPERCASE chars
            str = str.ToUpper();

            if (str.Length < 3) {
                  bytes[0] = (byte)(ValidChars.IndexOf(str[0]) | ValidChars.IndexOf(str[1]) << 5);
                  return bytes;
            }

            int bit_buffer = (ValidChars.IndexOf(str[0]) | ValidChars.IndexOf(str[1]) << 5);
            int bits_in_buffer = 10;
            int currentCharIndex = 2;

            for (int i = 0; i < bytes.Length; i++) {
                  bytes[i] = (byte)bit_buffer;
                  bit_buffer >>= 8;
                  bits_in_buffer -= 8;
                  while (bits_in_buffer < 8 && currentCharIndex < str.Length) {
                        bit_buffer |= ValidChars.IndexOf(str[currentCharIndex++]) << bits_in_buffer;
                        bits_in_buffer += 5;
                  }
            }

            return bytes;
      }
}
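
If the bit-twiddling in the C# is hard to follow, the same little-endian 5-bit packing can be sketched in a few lines of Python: the low 5 bits of the first byte become the first character, and sub-byte leftovers are dropped on decode, just as `str.Length * 5 / 8` drops them above.  This is a sketch for understanding, not a drop-in replacement:

```python
VALID_CHARS = "QAZ2WSX3" + "EDC4RFV5" + "TGB6YHN7" + "UJM8K9LP"

def to_base32(data: bytes) -> str:
    # accumulate bytes low-bit-first and peel off 5 bits per character
    out, bit_buffer, bits = [], 0, 0
    for b in data:
        bit_buffer |= b << bits
        bits += 8
        while bits >= 5:
            out.append(VALID_CHARS[bit_buffer & 31])
            bit_buffer >>= 5
            bits -= 5
    if bits:  # leftover high bits of the last byte
        out.append(VALID_CHARS[bit_buffer & 31])
    return "".join(out)

def from_base32(s: str) -> bytes:
    out, bit_buffer, bits = bytearray(), 0, 0
    for ch in s.upper():
        bit_buffer |= VALID_CHARS.index(ch) << bits  # ValueError on a bad char
        bits += 5
        while bits >= 8:
            out.append(bit_buffer & 0xFF)
            bit_buffer >>= 8
            bits -= 8
    return bytes(out)  # sub-byte leftovers are padding and get dropped

assert to_base32(b"\xff") == "P3"
assert from_base32(to_base32(b"barking mad")) == b"barking mad"
```

Note that because the alphabet contains neither 0 nor O, a mistyped keycode fails loudly instead of decoding to garbage.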

Code | Security
Tuesday, January 13, 2004 1:22:46 PM UTC  #    Comments [4]  |  Trackback

# Thursday, January 08, 2004
TSA: Bathroom lines dangerous


“passengers are asked not to congregate near the planes' toilets”

TSA spokesperson: “We frequently say security is not a spectator sport... we can't be successful about stopping terrorism without everyone playing a role”

The point he misses is that the role of the passengers shouldn't be to prevent terrorism.  The plane itself should handle that.  If having passengers “congregate” near toilets is a real threat, then the plane has much more serious security problems that need to be dealt with.  Where did they find these people who come up with this nonsense?  TSA == Totally Senile Administration?

Thursday, January 08, 2004 4:29:59 PM UTC  #    Comments [0]  |  Trackback

# Tuesday, December 30, 2003
Secure it? Nah, let's just make it illegal.

Stoplights can be changed to green via an infrared signal, so that police and ambulances can get through traffic faster.  Some people are using it to their own benefit.  So, the US government is trying to make these devices illegal if not authorized.  Hmm, sounds like the cell phone companies' response to the threat of eavesdroppers.  Why bother adding any security measures when we can just make it illegal?  I mean, no one ever breaks the law, right?


Tuesday, December 30, 2003 3:41:18 AM UTC  #    Comments [2]  |  Trackback