Free Rainbow Tables | Forum

Home of the Distributed Generator and Cracker

Posted: 06 Sep 2011, 02:59
Shoulder Surfer

Joined: 06 Sep 2011, 02:52
Posts: 4
I'm wondering if it would be too difficult to create a simple web interface to the tables so people don't have to download all the tables themselves. A user could go to the page, input a hash, and the web service would return the cracked hash, or, if it isn't cracked yet, some sort of message.
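
Roughly what I'm picturing, as a minimal sketch in Python (the CRACKED dict is a stand-in for the real tables, and the one example entry is just the MD5 of "password"):

Code:
# Minimal lookup-service sketch. CRACKED stands in for the real tables.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

CRACKED = {"5f4dcc3b5aa765d61d8327deb882cf99": "password"}  # md5("password")

class LookupHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests like /?hash=<hex digest>
        query = parse_qs(urlparse(self.path).query)
        digest = (query.get("hash") or [""])[0].lower()
        plaintext = CRACKED.get(digest)
        body = plaintext if plaintext is not None else "not cracked yet"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), LookupHandler).serve_forever()

Hitting http://host:8080/?hash=<md5> would then return either the plaintext or "not cracked yet".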


Posted: 06 Sep 2011, 04:27
Rainbow Table

Joined: 22 Sep 2010, 18:54
Posts: 249
Location: United States
This used to be the case, but it was discontinued because of the hassle. It became very I/O- and computationally expensive: in particular, whenever new tables came out, all existing hashes for that algorithm would be run through them.
topic1192.html

There's always the public hash cracker.
Other users will periodically download the uncracked list and run it through *their* tables, brute-force it, run it through wordlists, or some combination of those with rules. You can always use the uncracked hashes forum to request that a hash be cracked.


Posted: 06 Sep 2011, 04:58
Shoulder Surfer

Joined: 06 Sep 2011, 02:52
Posts: 4
After reading through topic1192.html, it seems to me that distributed databases have come a long way since then. There are a lot of calculations there on how long it would take to load a 1 TB file and search it for a hash, but with some of the newer databases, say we load a table into Voldemort (http://project-voldemort.com/), the lookup would be quite painless. Or even MongoDB, which does sharding, partitioning, fault tolerance, redundancy, and a whole bunch of other stuff automagically, could make those lookup times extremely small if the data is distributed across enough machines.
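
To make the lookup concrete, a rough sketch against MongoDB (the database and collection names here are invented, and only the endpoint-search step of a rainbow table attack maps onto a key-value lookup like this; the chain computation still has to happen somewhere):

Code:
# Rainbow chains keyed by their endpoints in a (possibly sharded) MongoDB
# collection. The "frt"/"endpoints" names are made up for illustration.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
endpoints = client.frt.endpoints  # documents: {"_id": endpoint, "start": start_point}

def find_chain_start(endpoint):
    """Return the start point of the chain ending in `endpoint`, or None."""
    doc = endpoints.find_one({"_id": endpoint})
    return doc["start"] if doc else None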

Ideally, building a distributed file system into this project would be the best way to go. All the connected users store a part of the data, and with redundancy and replication you could minimize the risk of data loss. Then a user could submit a hash to the site, and a task could be sent out to everyone asking "who has this hash?" Users who have it would simply return the cracked version. Of course, the amount of data a single user would need to store is somewhat significant. Using the magic number 7, so that at any time 7 machines hold the same data, then with 2731 GB of data currently and only 1654 online machines, each user would need to store at least 11.56 GB: (2731 GB * 7 replications) / 1654 machines = 11.558 GB. Of course, people who already have entire sets of data could enable a setting offering to store a larger share of the tables, reducing the minimum load on other users.
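
The arithmetic, spelled out:

Code:
# 7 replicas of the current 2731 GB of tables over 1654 online machines.
TOTAL_GB, REPLICAS, MACHINES = 2731, 7, 1654
per_machine_gb = TOTAL_GB * REPLICAS / MACHINES
print(f"{per_machine_gb:.3f} GB per machine")  # 11.558 GB per machine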


Posted: 06 Sep 2011, 05:17
Total Hash Enlightenment

Joined: 15 Jul 2009, 22:38
Posts: 1486
Location: Dallas, TX, USA
Let me know when you get all our torrents to have at least 7 seeders each. Hint: that will probably never happen.


Posted: 06 Sep 2011, 05:28
Shoulder Surfer

Joined: 06 Sep 2011, 02:52
Posts: 4
Well, that's the beauty of it: you aren't constantly seeding entire chunks of the tables. It could be integrated seamlessly into the current project; most people wouldn't even notice, since the default setting would be to store 10 GB of data. The tasks handed out could be as simple as "do you have this hash stored?" The problem with torrenting is that seeders have to keep sending all that data to anyone who wants the entire table. In the scenario I proposed, users would only upload data if they have the cracked hash, and if they do, it's what, a few KB at most? The chances of you having to reply to a task with an answer would be 7 in 1654.
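
Spelled out:

Code:
# Chance that any one of the 1654 online machines holds one of the 7 replicas
# of the shard a given query lands on.
REPLICAS, MACHINES = 7, 1654
print(f"{REPLICAS / MACHINES:.2%} chance of having to answer a given task")  # 0.42%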

All I'm saying is: why use one system for storage and another for distribution when you have all these users and could build it right into BOINC?


Posted: 06 Sep 2011, 05:36
Shoulder Surfer

Joined: 06 Sep 2011, 02:52
Posts: 4
A two-second Google search gave me this: https://boinc.berkeley.edu/trac/wiki/VolunteerStorage.


Posted: 06 Sep 2011, 05:42
Total Hash Enlightenment

Joined: 15 Jul 2009, 22:38
Posts: 1486
Location: Dallas, TX, USA
Erm, I guess I should make my jokes less subtle.

First, generating rainbow tables is far different, legally and ethically, from distributed password cracking. The latter is what many believe this project does, and it's why they do not crunch here.

Second, in the standard attack model, pre-computation has to be done first. This step essentially requires the entire last file of the given table, and splitting that work between BOINC nodes makes even the rcracki_mt OpenMP implementation look easy in comparison. This is the most computationally expensive phase of the online attack.

Third, once pre-computation has completed, its results need to be distributed to all nodes engaged in the attack. The remainder of the online phase is highly parallel and, for the most part, bounded by disk I/O.
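
To spell out the shape of that work, a simplified sketch (reduce_fn and hash_fn stand in for the real rcracki internals, and the table is just (start, endpoint) pairs; this is illustrative, not actual project code):

Code:
# Phase 1 (pre-computation): walk the target hash forward from every possible
# chain position to collect candidate endpoints. O(chain_len^2) hash/reduce ops.
def precompute_endpoints(target_hash, chain_len, reduce_fn, hash_fn):
    candidates = set()
    for pos in range(chain_len):
        value = reduce_fn(target_hash, pos)
        for step in range(pos + 1, chain_len):
            value = reduce_fn(hash_fn(value), step)
        candidates.add(value)
    return candidates

# Phase 2 would be shipping `candidates` to every node holding part of the table.

# Phase 3 (table search): scan the on-disk chains for matching endpoints, then
# regenerate each matching chain to find the plaintext. The scan is the
# I/O-bound part.
def search_table(chains, candidates, chain_len, target_hash, reduce_fn, hash_fn):
    for start, endpoint in chains:
        if endpoint in candidates:
            value = start
            for step in range(chain_len):
                if hash_fn(value) == target_hash:
                    return value  # plaintext recovered
                value = reduce_fn(hash_fn(value), step)
    return None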

Is doing all this possible? Sure. Is it anywhere on my todo list for this year? No. Is it likely to be on my todo list next year? I doubt it.

