What Was New Jan. - Mar. 17 2007
- What Was New: Jan. 2001 - Nov. 2003
- What Was New: July - Dec. 2000
- What Was New: April - June 2000
March 14, 2007: I've calculated the number of possible
10, 11, and 12 character Words Only passwords, as well as the matching
cracking times (assuming the attacker knows such passwords are in use
and uses a custom programmed attack). These have been added to the
new Pattern Samples page.
Calculating the longer passwords requires that the calculation be
fully automated. When I've written the necessary program, I'll post the results here.
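A sketch of the kind of calculation involved (the per-length word counts here are invented placeholders, not the generator's actual word list, and the guess rate is an assumption):

```python
# Count "Words Only"-style passwords of an exact length, built by
# concatenating words of 2-5 characters, then estimate cracking time.
# The per-length word counts below are illustrative assumptions only.
WORDS_BY_LEN = {2: 100, 3: 600, 4: 2000, 5: 4000}

def count_passwords(length, _cache={}):
    """Distinct word sequences whose characters total exactly `length`."""
    if length == 0:
        return 1
    if length not in _cache:
        _cache[length] = sum(n * count_passwords(length - wl)
                             for wl, n in WORDS_BY_LEN.items() if wl <= length)
    return _cache[length]

def crack_time_seconds(length, guesses_per_second=1_000_000_000):
    """Worst case: the attacker knows the scheme and the word list."""
    return count_passwords(length) / guesses_per_second

for n in (10, 11, 12):
    print(f"{n} chars: {count_passwords(n):,} passwords, "
          f"{crack_time_seconds(n):,.0f} s to exhaust")
```

Each extra character multiplies the count by roughly the sum of word-list contributions, which is why the 10, 11, and 12 character figures grow so quickly.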
March 13, 2007: The
Password Generator has had a major
facelift. Passwords display centered and much larger. Most of the sample
patterns have been removed, and those that are left are rearranged into
a more readable format, each with a brief description. All the removed
pattern samples, plus several new ones have been moved to an entirely new
Pattern Samples page. The patterns
have been placed into five groups, each of which has a sometimes substantial
introduction. Each sample pattern has a usually brief description that
often includes the non-default settings and what they do. While the original
Instruction page remains largely as it was, its role is that of a technical
reference. The new Pattern Samples page is more of a tutorial about
the kinds of passwords created by the password generator and how to
control the options to get different types of passwords. The whole
page has an introduction titled General Considerations. In a very
condensed manner this covers some of the key topics discussed in the
Good and Bad Passwords section.
The issues regarding random passwords versus structured
passwords are addressed.
March 2, 2007: The Password Generator has a totally new
Words Only option. Actually, "option" is an understatement. Words Only is
a completely new password generator, with totally separate logic for creating
passwords, though it shares the user interface of the older pattern-based password
generator. Words Only creates passwords from a list of two to five character
words and names. Once combined you'll be surprised how difficult it is to find
the original words. Words Only will not allow any password shorter than 10
characters and defaults to 11 to 13 characters. Shortly I will write a new
page that explains the logic behind Words Only, which seems to run counter to
nearly everything I've said about passwords. Every password generated by the
Words Only option will fail the Password Evaluator with its default options.
I suggest setting the dictionary word length range to 7 to 10 rather than the
default 3 to 7. It may also be useful to raise the "Maximum Sequence Characters,"
the "Maximum Repeat Characters," and the "Maximum 2 Character Pattern Repeats,"
each from 2 to 3 as needed. The defaults are aggressive settings, and while they
may matter in a short password, they will have little impact in passwords
10 characters and longer. The Password Generator instructions
have a fair discussion of Words Only and its options and capabilities.
March 2, 2007: The Table of Times to Crack Passwords has been completely updated to
account for the change in computer speeds since this was originally written
in 2001. It has also been extended to 14 character passwords to show what
can be done with all lower case passwords.
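The underlying arithmetic is straightforward; for example, with an assumed rate of one billion guesses per second (the table's actual assumed speeds may differ):

```python
# Brute-force search space and worst-case time for all-lowercase passwords.
# The guess rate is an assumption chosen for illustration.
GUESSES_PER_SECOND = 1_000_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def crack_years(length, alphabet_size=26):
    """Years to exhaust every password of the given length and alphabet."""
    combinations = alphabet_size ** length
    return combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for n in (8, 10, 12, 14):
    print(f"{n} lower-case chars: {crack_years(n):.2e} years")
```

The jump from 8 to 14 characters multiplies the search space by 26 to the sixth power, which is what makes long all-lowercase passwords practical.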
February 23, 2007: For the past several weeks I've been actively
working with the site for the first time in almost 5 years. There are no big
changes yet, but I'm working on some ideas that may be interesting,
if I can get past some very bothersome issues related to how a small number of
users are misusing and abusing the site.
Three changes, in order of importance, are 1) actions taken
to limit bot or automated access, a close 2) changes to the
license Terms, and 3) changes to the Password Generator. This last is the only one that
I consider a positive development.
Web Bots: In January I had two
days with approximately 10 times my normal
traffic volume and I started to take steps to limit the heavy bot activity. When
I got my latest bill in early Feb. my hosting costs just about doubled. I make
no money from this site and I won't pay for people to play with electronic
toys that grab web pages that are not being read. From the first day I made
it clear that only public search engines that respect robots.txt and restrict their retrieval
rates are allowed to use any programmed or automated access to retrieve GeodSoft web
pages. This isn't the first time I've taken steps to limit access but it's
the first time in several years, and by far the largest and most aggressive.
Users must understand that this does not apply only to wget
or other dedicated tools for retrieving pages automatically. It also applies to
browser add-ons such as Firefox's Fasterfox. The idea of grabbing all the URLs
on a page while you read that page, so the next page you go to is already in your
browser, is obscene. This is the biggest offense to the Internet since spam. It's
pure selfishness. My pages average over 40 links per page. No matter what path
you take, you can never read more than a fraction of the pages that are loaded. I'm
paying out of my pocket for each of these pages.
If all web browsers were to adopt this strategy, it would
increase the browsing load by ten or so times and totally alter network dynamics.
Everybody's browsing would be noticeably slower. Email and web browsing account for the
largest share of Internet traffic. If you increase one of these by an order of
magnitude, everyone on the Internet will feel the effects. For those who have
been around for a while, remember how the Internet used to bog down. If prefetch
becomes a common browser strategy, that is what we will see again.
The net is fast today because of the abundance of fiber optic cable installed in
the late 1990's, but a massive increase in browsing loads could quickly alter this.
Anyone who uses Fasterfox or a similar prefetch product on Geodsoft can expect to
permanently lose access to GeodSoft in a few days, at most.
I have developed a series of scripts that slice and dice each
day's logfile into highly selected views, with all the activity by one IP segregated
into two views that tell me everything that matters. The highest levels of activity
are looked at first and I can go as deep as I care to. The first view normally allows
me to make a preliminary, but usually correct, analysis in about two seconds.
It makes no difference whether 10 or 200 pages have been accessed. It does not
matter what the user agent is, or whether or not robots.txt was accessed. It doesn't
matter whether 10 pages are accessed in three seconds or 10 hours. If you summarize
and display the right information in the correct order, automated access just does
not look like human browsing. Finding the bots is trivial. The time consuming
part is identifying the hundreds of legitimate search engines and separating
those from individual users not authorized to use automated access, and making
the borderline calls.
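A minimal sketch of that kind of per-IP log summary, assuming Apache-style access logs (the regular expression and sample lines are illustrative, not the actual scripts, which were presumably shell or Perl):

```python
# Group an Apache-style access log by client IP and print the busiest
# clients first, with request count and first/last timestamps -- the
# sort of summary that makes automated access stand out at a glance.
import re
from collections import defaultdict

LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)"')

def summarize(log_lines):
    by_ip = defaultdict(list)
    for line in log_lines:
        m = LINE.match(line)
        if m:
            ip, timestamp, request = m.groups()
            by_ip[ip].append((timestamp, request))
    # Highest activity first; humans rarely fetch dozens of pages quickly.
    return sorted(by_ip.items(), key=lambda kv: len(kv[1]), reverse=True)

log = [
    '10.0.0.5 - - [02/Mar/2007:10:00:01 -0500] "GET /a.htm HTTP/1.0"',
    '10.0.0.5 - - [02/Mar/2007:10:00:02 -0500] "GET /b.htm HTTP/1.0"',
    '192.168.1.9 - - [02/Mar/2007:10:03:11 -0500] "GET /a.htm HTTP/1.1"',
]
for ip, hits in summarize(log):
    print(ip, len(hits), hits[0][0], hits[-1][0])
```

Sorting by volume and showing the time span per IP is exactly what makes a two-second judgment possible: many pages in few seconds simply does not look human.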
If I find an individual user accessing even fewer than 10
pages via an automated tool, I will block access to the site one way or another.
Anyone who wants to can see what the blocked
page looks like. I have cut my overall traffic levels dramatically from two or
three weeks ago. I try to block with robots.txt or by user agent first,
if that appears practical,
but have no hesitation to block by IP address. I don't typically block a specific
IP address but normally a range that I think will be large enough that even with
a new address from DHCP, the offender likely will not have access. That means
some innocent parties may be blocked. If you find yourself blocked and you
know you are innocent, follow the instructions on the blocked page and I'll
see what I can do. I've blocked some individuals, as well as some small companies,
and two medium size European ISPs.
I had believed there was nothing I could do to stop people from copying my content. I'd seen
several FAQs and other documents in multiple locations with credits to the
author. I thought that if I switched to open content, and someone using my
content could be completely legal by simply including my copyright and linking
back to my site, this would be an obvious choice, rather than risk any
copyright infringement actions. I was wrong.
I don't actively search for copies of my work
but recently, by accident, I stumbled across a copy of one of my
pages. It had virtually all the unique contents of the original,
without any suggestion that it came from somewhere else.
Since I found and acted on this one first, the hosting service has
already blocked access to this page.
Subsequent searches showed this page to have been copied thirty-some
times. I've only looked at about six so far and of those only one made even a
minimal effort to comply with my license.
If you've copied one or more of my pages, and not paid
careful attention to the terms of the GeodSoft Publication License, do us both a
favor and take it/them down. I'm not in a forgiving mood. Since Version 1.3, the
GeodSoft Publication License has included section "V. VIOLATIONS Any person
or organization that reproduces or distributes GeodSoft content in violation of
the terms of this license forfeits all future rights to display GeodSoft
content under the terms of this license," and that expresses my sentiments exactly.
There are many changes between 1.3 and 1.5.01; 1.4 through 1.5
only lasted four days. GeodSoft.com is now covered by a simple and fairly common
form of "shrinkwrap" license. By using the site you agree to its
terms. This adds contract law to copyright law. Even if a copy you made may
be legal under copyright law, there's a good chance you have no rights to it based on contract violation. I've
explicitly protected "fair use" in the License. People can still take limited
sections, and criticize me or what I say. You still have to acknowledge the
source as it makes no sense to criticize an unknown author. Read the terms carefully if
you are going to use anything from the site. I do have some very specific
linking requirements in the expanded Fair Use section. Copyright law does not provide
for anything like this; I believe contract law does, but it hasn't been
tested in court yet, that I know of.
The long copyright notice and so-called "incorporation
by reference" paragraph has two changes. The statement "These terms are subject
to change. Distribution is subject to the then current terms," was in 1.3.
"Then" before "current terms" was removed because it was potentially
ambiguous. I thought it was obvious that it meant "then current", i.e.,
today's or the newly modified terms, but realized there was an alternative
reading, though a somewhat far fetched one. The new statement is simpler and clearer.
Immediately following this, "or at the choice of the
distributor, those defined in a verifiably dated printout or electronic
copy of http://GeodSoft.com/terms.htm at the time of the distribution."
has been replaced by "or at the choice of the distributor, those in an
earlier, digitally signed electronic copy of http://GeodSoft.com/terms.htm
(or terms.pl) from the time of the distribution." There has always been
the problem of how you verifiably date a printout or electronic copy.
For printouts it's hard to think of a way short of having it notarized
that works reliably. For an electronic document, generally there is no way
to stop the altering of time stamps or the changing of a computer's time.
There is one way that comes close: a digital signature,
except that the time on the signing computer can easily be altered at the
time the digital signature is created. I have created
a second part to ensure the digitally signing computer uses the correct
time. Now the terms themselves have a time stamp. Currently the
terms are a Perl script. Later when I have time, I'll make them an .shtml
document. It's not a simply formatted date and time; it's a coded 20
digit integer. A computer savvy person with some time can probably
figure it out, but should not try. Changing the Terms time stamp voids
the terms and any right to ever use any GeodSoft content.
The Term's time stamp
and digital signature time stamp must be within five minutes of each
other. This should not be a problem as signing a document takes about
30 seconds, mostly depending on how long it takes you to enter your
password or passphrase. The GnuPG command is:
and you will be prompted for your passphrase. You can pretype the command
and practice with no file, and you'll just get an error message that the
file does not exist. Refresh or reload the Terms page just before saving, and
you should be able to do the whole process in 45 seconds.
gpg will create a new file named:
This contains the original file, three separator lines
and at the bottom, between the last two separator lines, a few lines of
random looking characters. This is the signature. This file must never be altered in
any way, which is another reason for putting it on a write once CD or
DVD. If a single byte is ever changed in the signed file or signature,
the file will not verify. Having a time
synchronized computer ensures that the time stamps will be consistent.
If the computer is not time synchronized, then you should check your
computer's time against a reliable source and set it to the correct
time. If your computer is not time synchronized and you don't check,
it could easily be off by five minutes.
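The five-minute comparison itself is simple arithmetic. Sketched here with plain epoch timestamps standing in for the coded 20-digit format, which is deliberately undocumented:

```python
# Check that a document's embedded time stamp and its signature's time
# stamp agree to within five minutes. Ordinary epoch seconds stand in
# for the site's coded 20-digit format, which is not public.
MAX_SKEW_SECONDS = 5 * 60

def stamps_agree(terms_stamp, signature_stamp):
    return abs(terms_stamp - signature_stamp) <= MAX_SKEW_SECONDS

print(stamps_agree(1172850000, 1172850030))   # True: 30 s apart
print(stamps_agree(1172850000, 1172850600))   # False: 10 min apart
```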
If there is ever any question about
any pages you might be using, email a copy of the signed Terms, as an
attachment, with your UTC offset, and I can verify you have valid terms,
and will know the date range they apply to. Include a list of pages that
you are displaying or otherwise distributing, and the dates you downloaded
or saved these pages. Include URLs where I can find the material, if I do
not already know, and I can verify that any pages you have
were from the period covered by your signed Terms. Legally the Terms apply to
materials not distributed on the Internet, but unless someone reports illicit
use, or by some freak chance, I stumble on an improper hardcopy, I'm not
likely to know of such use. I'll deal with such a situation if and when it
arises. If you have multiple pages from periods
that span one or more GeodSoft Publication License version changes,
you will need a digitally signed copy of each license version. If you used or put
on a web site files from different times, but all within a single period
covered by the same version of the GeodSoft Publication License, then you
need only one signed copy of the terms. Version 1.5.02 will be a different
version than 1.5.01, even if the important or directly relevant terms have
not changed. After a few more edits the Terms should become stable again.
Version 1.3 was in effect more than three years. Now is not a good time
to use GeodSoft content as I've been editing the terms almost daily.
Usually I keep the local files and the server in sync, but with
only ftp access, after I run the script to change the copyright notice,
which is easy, I have to upload every file. Even doing a directory at a
time it's tedious. When I managed my own servers
I just made a tar archive and extracted the new files over the old on the
server systems. I won't update the content files' copyright notices
until I'm confident I have stable Terms.
Digital signing is simple if you use PGP or GnuPG, but not if you
don't, and most people don't. The new terms mean, if you want assured
continued access to specific web pages and content, you cannot make second or
later generation copies; that is, you cannot copy from anyone except GeodSoft,
because GeodSoft is the only source for a properly time stamped copy of the
Terms. The new terms require that the "distributor", typically a webmaster or web site owner, own
such a signed copy.
Basically the same issue
exists with pages copied between Oct. 2001 and Jan. 2007. All the GeodSoft
Publication Licenses from 1.00 through 1.3 stated "a verifiably dated printout
or electronic copy." Clearly I would accept a notarized printout. I believe
I'd accept, except when something on their web site or in any communications
that had transpired had made me suspicious of their honesty, a sworn
statement similar to the ones required in the Digital Millennium Copyright
Act infringement notices. I'd basically be inclined to trust those who appeared to
have made an honest effort to comply with the Terms and to distrust those who
took material with no acknowledgment of any kind. The latter, of course,
have clearly violated virtually all the terms, and the only question is how
to most quickly have the material removed or made inaccessible.
In either a snail
mail letter or a digitally signed email (I don't have a fax) the statement would be: "I swear, under
penalty of perjury, that the attached photocopies of the GeodSoft terms.htm web page
printouts display the correct system date (and time) when the original
printouts were made, that this was the actual date (and time with no
more than a 5 minute margin of error), that the system time on
the computer from which original printouts were made has never been altered
to create false time stamps or invalid dates, and that neither
the printouts nor any photo copies have ever been tampered with to change
the appearance of the date (and time) displayed. I participated in or supervised
the just described activities and know them to be true from firsthand knowledge."
The precise wording may vary provided the material facts sworn to are
the same. Suitably altered wording
could be used with a hand written date on printouts, or a date stamp.
Someone who had an electronic copy of the terms
could send me a copy on a CD with a letter, or a tar file (or other format
which preserves file dates and attributes, and which I could easily
access) as an attachment to a digitally signed email, with the following
statement: "I swear, under penalty of perjury, that the attached terms.htm
file has the creation and/or modification date and time identical to that which it had
when I first downloaded or saved it. The computer's time was accurate to within
a few minutes of the real time and has never had its system date or time altered
to create false time stamps or invalid dates. When it has been necessary
to move the file on a system or to a different system, any move, copy, backup,
or similar utility was set to preserve the original creation and/or modification
date and time and
no utility has ever been used to alter or manipulate any creation and/or
modification date or time. I participated in or supervised the just described activities
and know them to be true from firsthand knowledge." The precise wording may vary
provided the material facts sworn to are the same.
While I could accept such sworn firsthand accounts to be
sufficient to establish "a verifiably dated printout or electronic copy" I could
never accept second or third hand accounts such as "I received this printout (or
file) from John Smith who swore to me that it had an accurate date." So regardless
of whether it's "verifiably dated" or "digitally signed", only persons who copied directly from a GeodSoft
web page can be assured of indefinite access to the GeodSoft files or
content they use.
Others can copy from non GeodSoft copies, but
they will always be bound by the current License terms, and the specific
copyright notices on each page. The practical
implication is that I can change the copyright notice at the bottom
of any specific page, from the long version to a standard
exclusive rights copyright notice, and it is no longer open content.
The few who have abided by all the terms can continue to use such GeodSoft
content but others may have to remove content that was previously
open. "Open content" is really not the right description for the GeodSoft
Publication License; something along the lines of "limited use"
or "defined use" might be more appropriate.
I have no general plans to apply a standard copyright
notice to GeodSoft content, but I have done it to the page I've found so
frequently copied. Anyone who has followed all the
terms as they existed at the time they copied a page has nothing to worry
about. My first concern is getting credit for work I've done, and second
to make it easy for people to get to this site. I don't really care
about aggregate traffic figures which cost me money. I care about readers
who think I have something to say regarding the topics that are still
relevant. The many visitors who come to the site from a search engine and
take a quick look at one page, or skim several spending a few seconds on
each are of no interest to me; I wish there was a way to keep them away.
Understanding how I feel is easy if you are in
a similar situation, but if not, then think about getting robbed. Don't
think of something that you can go to a store and replace, think of
someone stealing your family photo albums or wedding ring. That gets
closer but still isn't quite right. In one sense I still have all the pages,
but in another sense, everyone who has seen my pages on someone else's
site, with no mention of George Shaffer or GeodSoft, thinks that site
owner wrote the material. It's like
part of your reputation has been stolen or diminished. And if
someone sees both sites, they may wonder who copied whom. It's
a very emotional experience.
The other Terms changes are mostly minor. Most remnants
that still referred to the multiple mirrored sites I used to host
were removed. Collective
work authors have some latitude about where the copyright notice goes.
Fair use commentators and those using just a small amount can now use the
greater of three paragraphs or 500 words rather than the lesser.
One other thing
that is not minor: the web pages have not been updated; the copyright notice
must come from the Terms, not the page being copied. I'll fix this when
the Terms are stable. The Password Generator has gotten a few tweaks. I added a hexadecimal class
and some new patterns, with brief explanations. Just to show that it
could do "passwords" like Steve Gibson's (grc.com) really long random strings, I added
a predefined pattern that shows the settings necessary to produce these.
Mine could do these four or five years before Steve Gibson's was up but
I never saw the point. These could only be used with cut and paste, and
if you cut out pieces as he suggests, it's not really random any more,
since you're most likely to pick pieces that are easiest to remember.
Randomness is greatly overestimated in creating passwords. What you want
are uncrackable passwords, preferably that you can remember, and there are
various approaches to achieving that.
If I thought pure randomness was the most important factor
in passwords, I would never have started with a strongly pattern-based password
generator, which the original was. With this one you can still build
very pattern-oriented passwords; it's just that you can select any kind of
pattern you like. Or you can have "sloppy" patterns, or as most of the
newly added predefined patterns show, no pattern at all. The only thing
that Steve Gibson's "password generator" has that mine doesn't, and I would like,
is SSL. I'm at a hosting
service, Zipa.com, that has done an excellent job at bargain prices, but
part of where they make their money is extras, and SSL is an extra that
would greatly increase my hosting costs.
There is one change that I made to the password generator that
no one will ever see. I changed the random seed algorithm. What I'd used
before was a minor variation on one of the recommended random seed generators
in the Perl documentation. The common recommendations suggest some bit
operation on time and process number. I didn't much like the output of the suggested one, so I
tinkered with it and got what I thought was a much wider and unpredictable
range of seed values. Most of my testing was done on a Windows machine that
had random process IDs but from a relatively small range. I don't recall testing it on
Linux. Something caused me to take a look at it recently, and I did not
like the results on Linux.
I spent a couple weeks working on new algorithms until
I found one that generates only a few duplicate seeds out of 1 million
and these were always well separated. I got this to work equally well on
an OpenBSD machine and a Linux machine. For those who don't know, Linux generates
process IDs sequentially and BSDs randomly. Windows 6 to 8 years ago generated
IDs fairly randomly but from a tiny set, almost all under 1000. Linux and BSD
go to 32K or 64K. I did an enormous amount of testing and came up with a
somewhat CPU intensive process. Two different ps listings are gzipped and
then their contents treated as an array of long integers, and summed. To one
is added the sum of the character values of the UNIQUE_ID generated by the
server (which varies over a fairly narrow range) and the digits of the
remote IP treated as a single integer. To the other is added the sum of
the remote port and the current system time (a 10 digit integer).
The low-order 9 digits of the two sums are XOR-ed and
that is the seed. The full sums were 16 or so digits and there were
definitely patterns in the high order digits. After all, a ps listing
can only change so much from one fraction of a second to the next, even though
every ps invocation, and the manipulations of its result, change the
next ps listing.
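A rough reconstruction of the described construction (this is not the actual password.pl code; the ps listings and UNIQUE_ID value below are faked for illustration, and on the real server they would come from two live ps invocations and the web server):

```python
# Sketch of the described seed construction: two compressed process
# listings are treated as byte streams and summed, per-request values
# are mixed in, and the low-order nine decimal digits of the two sums
# are XOR-ed to form the seed.
import gzip
import time

def make_seed(ps_listing_a, ps_listing_b, unique_id, remote_ip, remote_port):
    # Sum the bytes of each gzipped ps listing.
    sum_a = sum(gzip.compress(ps_listing_a.encode()))
    sum_b = sum(gzip.compress(ps_listing_b.encode()))
    # Mix server- and client-specific values into the two sums.
    sum_a += sum(ord(c) for c in unique_id)
    sum_a += int(remote_ip.replace(".", ""))   # IP digits as one integer
    sum_b += remote_port + int(time.time())    # 10-digit epoch time
    # Keep only the low-order 9 decimal digits of each sum, then XOR.
    return (sum_a % 10**9) ^ (sum_b % 10**9)

seed = make_seed("PID TTY TIME CMD\n1 ? 00:00:01 init\n",
                 "PID TTY TIME CMD\n1 ? 00:00:02 init\n",
                 "Rw1CbEco5HcAAB9NBKIAAAAB", "192.168.1.9", 54321)
print(seed)  # always below 2**30, i.e. at most 1,073,741,823
```

Since each operand is below one billion, and one billion fits in 30 bits, the XOR can never exceed 1,073,741,823, which matches the range discussed below.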
If 999,999,999 is converted to binary it is
11 1011 1001 1010 1100 1001 1111 1111,
which is how the number is dealt with for bitwise operations. With an XOR
of two 30-bit fairly random binary numbers, it is theoretically possible for all digits
to be ones, or alternatively all zeros. Thirty binary ones equal 1,073,741,823.
Thus there are more than 73 million possible values over one billion.
At a glance, when viewing the decimal seeds, one might think there are way too
many numbers that begin with 1, but of the possible numbers, there are
more than 73 times as many billion-plus numbers as all one
through six digit numbers combined, and more than 7 times as many
billion-plus numbers as seven digit numbers. Of the ten digit decimal numbers created,
they will all begin with 10. There should be "roughly" equal numbers of
billion plus numbers starting with 100 - 106, and about a third as many
starting with 107. Also nearly 9 tenths of the total population will be
9 digit numbers.
In developing the seed logic I started with short
runs of 1,000 then 10,000 outputs. I wrote a couple of Perl scripts to
analyze the distribution of values, of leading digit sequences, and of
digits at each digit position. When the outputs started to look random
at 10,000, I went to 100,000 then 1,000,000 number runs. I performed
about a dozen million-number runs. As the logic got more complex, these
took about four days on my slow Linux desktop, and about half that on
an identical OpenBSD system.
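The original analysis scripts were Perl; the idea, sketched in Python here, is simply to tally leading digits and per-position digit frequencies over a run of seeds:

```python
# Tally the leading digit and per-position digit counts of a batch of
# seed values -- the kind of distribution check described above. The
# sample here is generated, standing in for a file of real seed output.
from collections import Counter
import random

def digit_stats(values):
    leading = Counter(str(v)[0] for v in values)
    by_position = Counter((i, d) for v in values
                          for i, d in enumerate(str(v)))
    return leading, by_position

random.seed(1)
sample = [random.randrange(2**30) for _ in range(100_000)]
leading, _ = digit_stats(sample)
print(sorted(leading.items()))
```

Skewed leading-digit counts or a digit position stuck on a few values shows up immediately in such tallies, long before any formal randomness test is needed.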
I'm not a mathematician, statistician, or cryptographer,
but from what I understand, these seed values appear to be both random
and unpredictable. Every machine should have somewhat different results
since the starting point is a wide form (lots of information) process (ps)
listing. To have even a chance of predicting what kind of numbers should
be produced, you'd need to know the format details of ps for the machine
and have a very good idea of what processes are running. You'd also need
to be intimately familiar with the details of the gzip compression process.
But as with
Heisenberg's uncertainty principle, any attempt to analyze what processes
are running requires running processes that look at processes, and these
necessarily change the output of any ps listing, not just while they are
running, but also because other processes get less CPU time while the
analysis programs run, so any long-lasting process stats are somewhat altered.
It's impossible to conceive of any way of anticipating the outputs
without changing the outputs. And if you don't have access to the web
server and an opportunity to closely monitor its environment, how
you could attempt to predict the output is beyond me.
There are also the extras I added. The web
server's UNIQUE_ID is obviously an attempt at some randomness based on
some combination of local machine and remote client information.
For each server, some parts of this remain constant, and others
vary with each web page served. The IP and remote port add something
from the client that should be different for each client. The IP will
for at least limited periods of time remain the same for each client
but be different from nearly all others (except when multiple
clients are coming from the same NAT or proxy environment). Port
numbers will increment with each request from a Linux client and
should be random from BSD clients. Time should be the same for
everyone but relentlessly moves forward.
Anyhow, after quite a bit of work, I've come up
with a method for generating high quality seeds. They occupy a much
larger range, appear to be random, and are much less predictable
than any seed algorithm I've seen suggested. Before doing
this, I did look at Perl modules that were supposed to provide
cryptographic quality random numbers, but either I could not install them
because I lacked too much prerequisite software, or they appeared to
install but simply did not work.
To some extent, the seed is not terribly important
with password.pl. Every different pattern, and many configuration changes
generate completely different results. But if the same seed value is
used with an identical pattern and configuration, the result is the same.
This is to say, though Perl's pseudo random number generator generates
a good approximation of random numbers, it always generates the same
sequence from the same seed. CGI programs start new every
time you access them, or click refresh, or change the parameters,
so the amount of entropy in Perl's random
number generator is limited by the range and quality of seed values.
With almost infinite configuration possibilities, few people will ever
see the same passwords from password.pl.
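The same-seed, same-sequence behavior is easy to demonstrate (shown in Python here; Perl's srand/rand behaves the same way):

```python
# A seeded pseudo-random generator always replays the same sequence,
# so a CGI script that reseeds on every request is only as unpredictable
# as its seeds.
import random

random.seed(123456789)
first_run = [random.randrange(100) for _ in range(5)]

random.seed(123456789)          # same seed ...
second_run = [random.randrange(100) for _ in range(5)]

print(first_run == second_run)  # True: identical sequences
```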
I think the importance of this can easily be
overrated. Except for the shortest, simplest patterns when the minimum length
password is created, or when an occasional fluke dictionary word is created,
no passwords that come out of password.pl lend themselves very well to
today's cracking techniques (and you can and should always use Password Evaluator
to check them). What difference does it make if two people
in Washington, D.C., and New York see and choose the same password, or
even two people in Falls Church, VA? The only way this could matter is if
the same cracker has access to machines that both users have used that
password on, and both or all of those machines are Windows machines.
The reason I single out Windows machines, is that
all Unix variants use
a "salt" that causes 4,096 different password hashes to be generated
from each plain text password. Further, different Unix variants use
different hashing algorithms, and many provide for local selection
and configuration of the hashing algorithm. The chance of two people
with the same password, getting the same hash on the same or different
Unix computers is very small, but is assured on Windows machines.
All Windows machines, at least with the same OS version, produce
the same password hashes. Windows passwords should be several characters
longer to compensate for this weakness.
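The effect of a salt is easy to illustrate with a generic salted hash (modern constructions differ from the classic Unix crypt(), but the principle is the same):

```python
# With a salt, the same plain-text password yields many different stored
# hashes; without one (as with the Windows password hashes of that era),
# identical passwords always produce identical hashes.
import hashlib

def salted_hash(password, salt):
    return hashlib.sha256(salt.encode() + password.encode()).hexdigest()

pw = "correct horse"
print(salted_hash(pw, "ab") == salted_hash(pw, "ab"))  # True: same salt
print(salted_hash(pw, "ab") == salted_hash(pw, "cd"))  # False: new salt, new hash
```

A cracker who precomputes hashes for a candidate password list must redo the work once per salt value, which is exactly why the salt multiplies the attacker's effort.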
If the very remote possibility that someone may see passwords
that you've seen concerns you, set the limit
to 20 or 50 or even 100 passwords and you'll surely see passwords no one else has or
will, even with the default pattern and configuration.
I'm both amused and annoyed by Steve Gibson (grc.com)
calling his passwords
"Perfect," as if there could be such a thing as a perfect password.
It's so naive on one hand, and grossly misinformative on the other. Everyone
has different needs. Steve is confusing mathematical strength with perfection
and forgetting usability. He makes the claim "and the cryptographically-strong
pseudo random number generator we use guarantees that no similar strings will
ever be produced again." No matter how large the universe of possibilities,
even if it's 1 followed by 100 zeros, it's possible, though extremely unlikely,
for the same value to
repeat. And from what I understand, it's not possible to produce
that kind of entropy on a computer. He has what is most likely
a single function web server, and probably displays dozens or hundreds of
these an hour, but when you generate a GnuPG key, gpg has you moving
your mouse randomly for maybe a minute to get the entropy it needs.
Of course the GnuPG key is much longer. Still, I don't see how he can
make these claims. The only guaranteed way I know to
never have a number repeat is to use a progressive sequence like time,
and that becomes very predictable, which is worse than a pseudo random
number that occasionally repeats.
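How unlikely a repeat is can be estimated with the standard birthday-problem
approximation. A small sketch, using the "1 followed by 100 zeros" universe
mentioned above and a hypothetical billion passwords ever generated:

```python
import math

def collision_probability(draws, space):
    """Birthday-problem approximation: the chance that at least two of
    `draws` values, chosen uniformly at random from `space` equally
    likely possibilities, are identical."""
    exponent = -draws * (draws - 1) / (2.0 * space)
    # -expm1 keeps precision when the exponent is extremely close to zero.
    return -math.expm1(exponent)

space = 10 ** 100       # "1 followed by 100 zeros" possibilities
generated = 10 ** 9     # a billion passwords generated, ever
p = collision_probability(generated, space)
print(p)                # on the order of 1e-83: tiny, but not zero
```

The probability is astronomically small, but "extremely unlikely" is not
the same as "guarantees ... never," which is the distinction being made here.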
The alternative is to claim that the pseudo random
generator does not depend on a seed and never repeats a sequence of numbers.
That's very hard, if not simply impossible, to prove. Some very smart people
have developed what were thought to be very clever cryptographic algorithms
which were subsequently shown to have gaping holes. I've liked some of
what Steve Gibson has done in the past but I think he's entirely on the
wrong track here. His "passwords" are not perfect; they aren't even usable.
The only way to use one of Steve's passwords, or even a significant part of
one, is cut and paste. That means you have to trust it
to electronic storage. Even if you used the best electronic password "safe,"
what do you protect that with? If you use a password like any of Steve's, you
risk losing all your passwords because you can't remember your master password.
And if you use a password you can be sure to remember, then the "safe" is at
risk, unless you keep it on a second non networked computer with no
connections to anything, but then you can't use cut and paste.
The last place
I want any of my passwords stored is on a networked hard drive, regardless
of how good or how many firewalls I'm behind, or how good I hope my security is.
This is the main
reason I never use the browser features that "remember" a password for me.
Even the possibility that my bank and credit card company passwords might
be in my browser cache is scary. I think that if they show on the screen as
asterisks, that's how they would appear in the cache, but I'm not sure.
I hope someone finds a use for the new changes, minor
though they are. A while back I got started on a pass phrase generator but
got sidetracked. Recently Roger Grimes of InfoWorld made the point that a
10 character all lower case password has nearly the same strength as an
8 character password using characters from a 95 character set (actually it
takes a bit over eleven lower case characters to match an 8 character
password using all types of characters). This is a very important
point that almost the entire computer
world insists on missing, myself included, until I was shown the light. All
current dictionary attacks depend on one word with sometimes many variations
including added characters.
If you avoid obvious combinations and already known passwords, an 11 character, lower
case password won't fall to a dictionary attack, or to brute force either.
If a 12 character all lower case password is mathematically stronger than
an 8 character password with mixed case, digit(s), and one or more symbols,
which is likely to be easier to remember and type? If the same mathematical
strength can be achieved with single case, all letter passwords, why the
industry obsession with diverse character sets?
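The arithmetic behind this comparison is just logarithms and is easy to
check. A small sketch, taking the 95 character set to be printable ASCII
as above:

```python
import math

def entropy_bits(charset_size, length):
    """Bits of entropy in a password of `length` characters drawn
    uniformly at random from a set of `charset_size` characters."""
    return length * math.log2(charset_size)

full_8 = entropy_bits(95, 8)      # 8 chars, all 95 printable ASCII
lower_10 = entropy_bits(26, 10)   # 10 chars, lower case only
lower_12 = entropy_bits(26, 12)   # 12 chars, lower case only

print(f"95-char set, 8 chars: {full_8:.1f} bits")    # about 52.6
print(f"lower case, 10 chars: {lower_10:.1f} bits")  # about 47.0
print(f"lower case, 12 chars: {lower_12:.1f} bits")  # about 56.4

# Lower case length needed to match the 8 character, 95 character set password:
print(f"break-even length: {full_8 / math.log2(26):.1f}")  # about 11.2
```

So a 12 character all lower case password is in fact mathematically stronger
than an 8 character password drawn from all 95 printable characters.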
I'm also thinking about a password generator that makes 10 character and
longer combinations of 2 - 5 character words, with some random characters
Copyright © 2000 - 2014 by George Shaffer. This material may be
distributed only subject to the terms and conditions set forth in
These terms are subject to change. Distribution is subject to
the current terms, or at the choice of the distributor, those
in an earlier, digitally signed electronic copy of
http://GeodSoft.com/terms.htm (or cgi-bin/terms.pl) from the
time of the distribution. Distribution of substantively modified
versions of GeodSoft content is prohibited without the explicit written
permission of George Shaffer. Distribution of the work or derivatives
of the work, in whole or in part, for commercial purposes is prohibited
unless prior written permission is obtained from George Shaffer.
Distribution in accordance with these terms, for unrestricted and
uncompensated public access, non profit, or internal company use is