Team bindshell-dot-nl
Resources

Active Members: 4 (plus 3 part-time)
Nicks: epixoip, rebeccablack / L10n, undeath, kalgecin; also Bitweasil,
nethic, Falconor
Countries: US (4), DE (1), TZ (1), CA (1)
Software: John the Ripper, Hashcat, oclHashcat*, PACK, maskprocessor,
AccessData PRTK, AccessData FTK, kalgecin's custom rainbow tables,
Cryptohaze Multiforcer, Cryptohaze GRT, Rebecca Black Soundboard for iPad
Hardware: Dedicated: 55 CPU cores, 10 GPU cores; part-time: another
4 CPU cores and 9 GPU cores (see table below)
What We Thought
For those who remember us, we competed as Rippin and Tearin in last
year's competition. Based on our extremely lackluster performance last
year, we decided to bring our "A game" this year and attempt to salvage
our reputation.
Overall we thought the contest was a little easier this year than last,
but that might be because we were far better prepared and more
experienced this year. I really liked the idea of the challenges, and
the weighting of each hash type appeared fair for the most part. Not
getting at least one or two more of the challenges, and not starting on
the mscash2 and blowfish hashes until the contest was almost half over,
is what kept us from being a little more competitive.
The competition was quite tiring; none of us slept for more than a
couple hours during the 48-hour window. And although we were all
extremely exhausted by the end and were relieved when it was over, we
really did have a blast and we'll absolutely do it again next year.
Approach
Our team comprised four full-time crackers, two part-time crackers, and
one part-time contributor who cracked no hashes, but instead decided to
take a break from researching millimeter-wave sensor technology and put
his PhD to good use by analyzing plaintexts for us and assisting with
rule writing. Communication was largely over IRC, although three of us
were in Vegas, so hotel room meetings were abundant.
Before the contest began, I created a small web portal for team members
to submit their cracked plaintexts and view all of the plaintexts our
team had cracked as a whole. I also created an email distro list for all
of the team members. Once we received the email with the hashlist
download URL, and once the technical difficulties with
contest.korelogic.com were resolved, I passed the URL to the team and it
essentially became a free-for-all.
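As an illustration, a minimal portal along these lines can be sketched
with Python's standard library alone. This is a hypothetical
reconstruction, not the actual portal (whose implementation isn't
described here): members POST newline-separated plains, and a GET
returns everything the team has cracked so far.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

plains = set()           # all plaintexts the team has submitted
lock = threading.Lock()  # guard against concurrent submissions

class PortalHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept newline-separated plaintexts in the request body
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        with lock:
            for line in body.splitlines():
                if line.strip():
                    plains.add(line.strip())
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Return every plaintext cracked so far, one per line
        with lock:
            body = "\n".join(sorted(plains)).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def run_portal(port=0):
    """Start the portal on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), PortalHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```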
The .zip passwords were trivial; we guessed those right away without
even having to crack them. The challenges, however, we found more
difficult, so we queued all of the challenge files up in PRTK and
worked on them as they were unlocked. The challenges we were able to
complete were super simple: if you could get into the
archive/.pdf/.doc, you were essentially handed a thousand or more
points. Initially rebeccablack and undeath were handling the
challenges; once they couldn't crack any more, I took care of the core
dumps.
That .dmg was impossible. We tried using FTK to crack it, but that was a
no-go. rebeccablack ended up writing a shell script on his MacBook to
loop through a dictionary and attempt to mount the image, but it was
only cracking at about 1 c/s.
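The mount-loop approach can be sketched roughly like this in Python (a
reconstruction, not rebeccablack's actual script): `hdiutil attach
-stdinpass` reads the candidate passphrase from stdin, and it is the
per-attempt mount overhead that limits the rate to around 1 c/s.

```python
import subprocess

def attach_cmd(dmg_path):
    # hdiutil reads the passphrase from stdin when given -stdinpass;
    # -nobrowse keeps successful mounts out of the Finder
    return ["hdiutil", "attach", "-stdinpass", "-nobrowse", dmg_path]

def crack_dmg(dmg_path, wordlist_path):
    """Try each dictionary word as the .dmg passphrase (macOS only)."""
    with open(wordlist_path, errors="ignore") as f:
        for word in (line.rstrip("\n") for line in f):
            result = subprocess.run(
                attach_cmd(dmg_path),
                input=word.encode(),
                capture_output=True,
            )
            if result.returncode == 0:
                return word  # mount succeeded: passphrase found
    return None
```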
Bitweasil's initial plan was to brute force the raw hashes with
Cryptohaze Multiforcer, utilizing all of our Nvidia cards in one large
distributed pool. He also worked through the night to implement
md5_gen(22) and md5_gen(23) in Multiforcer. However, due to some
segfaults and other bugs, we never actually made that happen, which was
really too bad: we likely would have been able to dominate the gen(22)
and gen(23) hashes had we gotten it to work. As it turned out, we ended
up starting on the gen(22) and gen(23) hashes far later than planned.
My initial plan was to hit each of the hashtypes with --single mode in
JtR, while simultaneously hitting the raw hash types on the 5870s with
massive dictionaries and mangling rules in an attempt to obtain a large
number of plaintexts for analysis. I was able to make an initial run
with a 7GB dictionary and about 41k rules against raw md5 and ntlm in
just under 90 minutes. Oddly, a very large percentage of the numeric,
date-format, and multi-word plaintexts were found directly in my
dictionary without any mangling. I found 60% of the raw md5 hashes and
50% of the ntlm hashes on the first pass.
Those plaintexts, along with our massive dictionaries, were then
immediately run through JtR with --rules, which gleaned several thousand
more plaintexts. Plaintexts were continually fed back in as wordlists,
mutating the new plains again and again with --rules until thoroughly
exhausted. This is basically how we found the vast majority of the
plains we cracked. Undeath was the only one using Hashcat, and I know
he took a similar approach with that tool as well.
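That feed-back-and-mangle loop amounts to iterating until a fixpoint:
mangle every known plain, keep whatever cracks, and repeat on the new
plains only. A toy sketch, with three stand-in rules in place of the
real (far larger) JtR/hashcat rule sets:

```python
def mangle(word):
    # Three toy mangling rules standing in for a real rule set
    yield word.capitalize()
    yield word + "1"
    yield word[::-1]

def exhaust(seed_plains, check):
    """Mangle every known plain, keep candidates that crack, repeat
    on the newly-found plains until nothing new turns up."""
    cracked = set(seed_plains)
    frontier = set(seed_plains)
    while frontier:
        candidates = {m for w in frontier for m in mangle(w)}
        new = {c for c in candidates - cracked if check(c)}
        cracked |= new
        frontier = new  # only re-mangle plains we haven't seen before
    return cracked
```

Here `check` stands in for the actual hash comparison; in practice each
round was a JtR or hashcat run with the previous round's plains as the
wordlist.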
The plaintexts were also used to generate .chr files for JtR, and gave
us a decent sample to analyze for writing hashcat and JtR rules based
on identified patterns. The majority of the remaining
plains were found through writing new rules based on new patterns and
running all of our plains and dictionaries through those new rules. The
rest were found through incremental mode in JtR with our custom .chrs,
and a small percentage through mask attacks with oclHashcat or
maskprocessor piped into JtR. And those plains of course were run back
through as a dictionary and mangled with --rules for each hash type,
again and again.
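For the mask attacks, maskprocessor expands a mask like `?l?d` into
every matching candidate string. A tiny Python stand-in for that
expansion (the real tool is a standalone C program whose output we
piped into JtR; only the `?l` and `?d` charsets are modeled here):

```python
from itertools import product

# Built-in charsets for the two mask placeholders modeled here
CHARSETS = {"l": "abcdefghijklmnopqrstuvwxyz", "d": "0123456789"}

def expand_mask(mask):
    """Yield every candidate matching the mask; ?l = lowercase,
    ?d = digit, any other character is taken literally."""
    slots = []
    i = 0
    while i < len(mask):
        if mask[i] == "?" and i + 1 < len(mask):
            slots.append(CHARSETS[mask[i + 1]])
            i += 2
        else:
            slots.append(mask[i])  # literal character
            i += 1
    for combo in product(*slots):
        yield "".join(combo)
```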
An ascii diagram representing our approach:
+<-------------- new rules
| ^
v |
+---> dictionary ---+ |
hashes --->+ +----> new plains ----> PACK
+---> bruteforce ---+ | |
^ | |
| v v
| jtr .chr maskprocessor
| | |
| v v
+<--------------------+<------------+
Observations
I think we did extremely well for a small ad-hoc team, especially since
we were able to keep pace with the vendors. We certainly did a lot
better than last year.
We liked that the hashes were already split up by hashtype and labeled
appropriately this year. Since there was a larger variety of hashtypes
this year, that definitely took the ambiguity out of things and made it
a lot easier for us to just start cracking.
I constantly felt like we weren't using our time efficiently. That might
be because we were using a lot of hardware with only a handful of
people, and trying to keep all of the cores busy and keep everything
straight was a bit of a chore. I personally had 18 CPU cores and 7 GPU
cores to keep busy, and I had a hell of a time ensuring something was
always running, remembering which attacks I had already run, and so on.
Looking back, I think there were a few hashtypes we never ran
dictionaries against, and a couple of others we never brute forced. So
next year I might keep a matrix / checklist where we check off what
we've done on each hash type, just to stay better organized and ensure
we don't have any gaps.
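Such a matrix could be as simple as a per-hashtype set of completed
attacks plus a helper that lists the gaps. The hashtype and attack
names below are illustrative, not an actual plan:

```python
# Attack classes we'd want to tick off for every hashtype
ATTACKS = ["wordlist", "wordlist+rules", "incremental", "mask"]

def coverage_gaps(done):
    """done maps hashtype -> set of attacks already run;
    returns hashtype -> list of attacks still missing."""
    return {ht: [a for a in ATTACKS if a not in ran]
            for ht, ran in done.items()}
```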
The hotel wifi wasn't making things any easier, either. At one point I
was in the middle of starting a few more runs after several threads had
stopped, and the hotel wifi suddenly stopped working for a couple
hours, so my 990X was largely idle during that period. Not a good
feeling. Things got even worse when I decided to tether the next time
the wifi went down, because T-Mobile throttles your bandwidth after you
exceed 2 GB of transfer. Tethering quickly became worse than the crappy
hotel wifi.
We made a decision as a team early on not to work on any of the super
slow hashes, which was a huge mistake. Once we saw how much the harder
hashes were worth and how many of them other teams were actually
cracking, we decided to give them a shot about half-way through the
contest. Basically, I created a .chr file based on all of the length
6-7 lower-alpha, numeric, and date-format plains, and ran through them
in incremental mode with that .chr. Once I had over 100 plaintexts from
each, I generated a second .chr based on those plains and launched two
more JtR instances with those .chrs. We had five cores working on
mscash2 at a combined speed of ~1800 c/s, and four cores working on
blowfish at a combined speed of ~3000 c/s; had we started 20 hours
earlier, we probably would have found a decent amount more.
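The filtering step behind that first .chr can be sketched as follows.
This is a simplification: JtR's .chr files encode much richer
positional statistics than a plain per-position count, and date formats
like MMDDYY fall under the numeric pattern here.

```python
import re
from collections import Counter

def training_plains(plains):
    """Keep only length 6-7 all-lowercase or all-numeric plains
    (numeric covers separator-free date formats like MMDDYY)."""
    keep = re.compile(r"^(?:[a-z]{6,7}|[0-9]{6,7})$")
    return [p for p in plains if keep.match(p)]

def position_counts(plains):
    """Per-position character frequencies over the training set,
    a crude stand-in for the statistics a .chr file captures."""
    counts = [Counter() for _ in range(max(map(len, plains), default=0))]
    for p in plains:
        for i, ch in enumerate(p):
            counts[i][ch] += 1
    return counts
```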
Another thing we did that was definitely suboptimal was tracking only
the plaintexts, not the hashes left to crack, which resulted in a lot
of duplicated effort (especially for the salted hash types). For next
year's contest we're going to have to come up with a way to track
remaining hashes that doesn't consume a ton of admin time.
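One low-admin option would be deriving the remaining set from the pot
file with a simple set difference. A sketch (real pot-file formats vary
by hash type, particularly where salts are embedded in the ciphertext
field):

```python
def cracked_from_pot(pot_lines):
    """Parse john.pot-style 'ciphertext:plain' lines; the field before
    the first colon identifies the cracked hash (simplified)."""
    return {line.split(":", 1)[0] for line in pot_lines if ":" in line}

def remaining(all_hashes, pot_lines):
    """Hashes from the contest list that no pot entry accounts for."""
    return sorted(set(all_hashes) - cracked_from_pot(pot_lines))
```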
Bitweasil from Cryptohaze appreciated the chance to test his tools in a
real-world challenge, and has several ideas for refining and further
developing them. His tools will definitely pose more of a threat next
year.
I found that I really didn't use my GPUs as much as I had intended to.
Aside from the initial run against raw md5 and ntlm, and then another
run performing a mask attack against raw sha1 and a couple DES hashes
from the challenges, my primary tool throughout the majority of the
competition was John the Ripper. I don't think anyone else on the team
really used their GPUs either. Had Bitweasil gotten Multiforcer stable
and running, then yeah, we would have had a slew of Nvidia cards working
on the gen(22) and gen(23) hashes non-stop. But as it was, our GPUs were
largely idle. And since Multiforcer only supports CUDA, our most
powerful GPUs, which were AMD, would have sat idle even if it had been
up and running.
Conclusion and Thanks
It was an exhausting week, but we really had a blast. Overall I think
this year's contest was a lot better than last year's. We really liked
the idea of doing the challenges and weighting the point values for each
hashtype. We also enjoyed the large variety of hashtypes this year, and
although we came fully prepared for them, we appreciated that there were
no LM hashes.
I'd like to thank each of my team members, both active and part-time,
for their dedication and hard work. You guys were fantastic!
We'd also like to thank Solar Designer and atom for John the Ripper and
*Hashcat respectively, as they were our primary tools throughout the
competition. We'd also like to thank Bitweasil for his willingness to
sit in our hotel room for hours upon end trying to resolve the issues
with Multiforcer. Even though that didn't pan out the way we had hoped,
we really appreciate your time and effort.
And of course we'd like to thank Hank, Minga, and the rest of the
KoreLogic bunch for hosting this contest and making it fun, relevant,
and fantastic.
See you next year!
Jeremi, on behalf of Team bindshell.nl
Hardware Resources
Full-Time Systems/Resources                  | CPU Cores | GPU Cores
AMD Phenom II X6 1090T, 4x Radeon HD 5870    | 6         | 4
Intel Core i7 990 Extreme, 3x GTX 460        | 12        | 3
Intel Core i7 2600K, Radeon 5850             | 8         | 1
AMD Phenom II X4 965, GTX 460                | 4         | 1
Intel Core i7 960                            | 8         |
Intel Core i7 920                            | 8         |
Intel Core i5 750                            | 4         |
AMD Athlon64 X2 5000+, GeForce 8800GTS       | 2         | 1
Intel Atom N455                              | 2         |
Intel Celeron M 420                          | 1         |

Part-Time Systems/Resources                  | CPU Cores | GPU Cores
server1, GTX 580, 3x GTX 470                 |           | 5
server2, 2x GTX 295, GTX 260                 |           | 3
Intel Xeon                                   | 2         |
Intel Core i3 550, GTX 460                   | 2         | 1