Re: Lock Picking Spectrum of Difficulty
The code is hidden in the tumblers. One position opens the lock, another position opens one of these doors...
http://www.youtube.com/xeotech1
(ノಠ益ಠ)ノ彡┻━┻
░░░░░░░░░░░░░
flywheel wrote:-Asking for help from any computer programmers-
Here is to LockMash™, or whatever name it ends up with!
Thanks!
OldddffAASSTT the Spin Master Extraordinaire and American Lock Slayer
Josephus wrote:Subjective and objective measurements are not what they appear to be prima facie. It is somewhat of a modern myth that whatever is objective is both outside human experience and better than subjective measurement. All that objectivity brings is a standard to measure against, which subjectivity does not. How that external objective measure is created is where the trouble starts.
Scientific measurements, when not attempting to derive a categorical fact, are not at all objective. How do you define 'difficulty'? What makes you think that any specific feature always increases difficulty for every person? Can a combination of metrics that individually increase difficulty actually decrease it overall? Some features that clearly increase difficulty in one model clearly decrease it in another. Further, for some people, features or metrics that appear more secure actually make locks easier. How do you know that the metric or weighting you chose is correct? What methodology will it take to create such things? It becomes a 'turtles all the way down' scenario when trying to find an objective way to create an objective measure. You must account not only for the metrics to be used and how they will be measured, but for how those metrics will be chosen, why they will be chosen, how to prove that those measures really are objective (they aren't), and how to prove that they are a viable measurement in the first place. At some point you will hit a limitation of knowledge. At some point some feature is more difficult because it feels more difficult and for no other reason. There is nothing scientifically empirical about that, or is there?
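To make the weighting problem concrete, here is a rough Python sketch of what a metric-based difficulty score even looks like once you commit to one. Every feature name, weight, and rating below is an invented placeholder; each number is exactly the kind of judgment call someone would have to defend:

Code:
# A hypothetical weighted difficulty score. The features, the weights, and
# the linear combination itself are all arbitrary choices, which is the point.

FEATURE_WEIGHTS = {
    "security_pins": 3.0,  # spools, serrations, etc.
    "tight_keyway":  2.0,
    "tolerances":    4.0,
    "key_control":   1.0,
}

def difficulty(lock_features: dict) -> float:
    """Score a lock as a weighted sum of per-feature ratings (0-10 each)."""
    return sum(FEATURE_WEIGHTS[name] * value
               for name, value in lock_features.items()
               if name in FEATURE_WEIGHTS)

# Two hypothetical locks rated per feature on a 0-10 scale.
kwikset = {"security_pins": 0, "tight_keyway": 2, "tolerances": 3, "key_control": 0}
medeco  = {"security_pins": 7, "tight_keyway": 6, "tolerances": 9, "key_control": 8}

print(difficulty(kwikset), difficulty(medeco))  # 16.0 77.0

# Note what a linear sum cannot express: the case above where two features
# that each raise difficulty on their own lower it in combination.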
There is no such dichotomy between subjective measurements and empirical means. A person who really likes cake can test each type of cake available and decide which sorts he likes best. The conclusions from that subjective testing can then be used to predict which new types of cake he would most likely enjoy. I know I do not like things with mushrooms in them. That is entirely subjective on my part. I can use that single data point, 'I do not like mushrooms', to predict and test new things: if something has mushrooms in it, I probably won't like it. Unless they are made of brass.
Even granting the limitations of knowledge, the subjectivity in methodology, and the acceptance of empirical measures based on personal taste, there is still the problem of rumination, or iteration. Say you have a good methodology of choice down, a methodology of testing, a way to store the results, and a way to use them in some practical fashion. That still leaves a lack-of-bounds problem. When do you stop adding metrics? When is the weighting good enough? Where is it accurate enough? How do you measure that accuracy? With metric standards there is always a forced choice to limit the bounds; there will always be some point where you must say 'that is accurate enough'. A database with 10,000 options for how a lock can be more or less secure isn't at all useful, even though it could be more accurate than a metric of one option. Second, as the environment changes, things that once were difficult may no longer be. Twenty years ago Medeco would have been ranked higher than brands that are considered its superior today. Suddenly all those metrics and methodologies have to be thrown out or reworked on an ongoing basis to maintain even remotely the same accuracy. Instead of increasing in accuracy over time (as ranking systems do), metric systems diverge from the mean in unusual, unpredictable ways. This is one of the more significant issues with technical stock analysis: the validity of the metrics moves nearly as fast as the data.
The alternative to this is the basis of the thread already: an evolutionary ranking system. It bypasses the difficulties of which measure should be chosen, why, and how, along with the limitation-of-knowledge problem, while keeping the ability to predict outcomes from past data. Knowing when the job is done is the prime factor in completing it. With metrics there is no done; there is no cessation of work. You could continue the project from here to the end of time without ever completing it. Ranking systems even get more accurate over time instead of less. Of course, a 'hot or not' lock page would be way more fun to use than downloading a relational database, but that is just my subjectivity showing.
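To put a sketch behind that: the whole engine for such a page could be a handful of lines of Elo, the same update scheme chess ratings use, fed by pairwise 'which lock felt harder?' votes. Everything concrete below (the lock names, the 1500 starting rating, the K-factor of 32) is a placeholder of my own, not anything settled in this thread:

Code:
# A minimal evolutionary-ranking sketch using an Elo-style update. Each
# vote nudges two ratings, so accuracy accumulates as votes come in.

def expected(rating_a: float, rating_b: float) -> float:
    """Predicted probability that lock A is judged harder than lock B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def record_vote(ratings: dict, harder: str, easier: str, k: float = 32.0) -> None:
    """One user judged `harder` more difficult to pick than `easier`."""
    ra = ratings.setdefault(harder, 1500.0)
    rb = ratings.setdefault(easier, 1500.0)
    surprise = 1.0 - expected(ra, rb)    # how much the vote disagreed with the ranking
    ratings[harder] = ra + k * surprise  # winner gains what was unexpected
    ratings[easier] = rb - k * surprise  # loser sheds the same amount

ratings = {}
# Hypothetical votes, newest last.
votes = [("Medeco Biaxial", "Kwikset"),
         ("Abloy Protec", "Medeco Biaxial"),
         ("Medeco Biaxial", "Kwikset")]
for harder, easier in votes:
    record_vote(ratings, harder, easier)

for lock, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{lock:>15}: {rating:.0f}")

The Medeco problem from above also takes care of itself here: if a once-hard lock starts losing votes to newer designs, its rating drifts down on its own, with no weighting table to rework.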
elbowmacaroni wrote:I think someone may have read "Zen and the Art of Motorcycle Maintenance" one too many times and is transposing quality into difficulty.
Deadlock wrote:Whereabouts would something like a Squire "Old Fashioned" lever padlock go?