Friday, November 19, 2010

Credit Theories of Knowledge claim that knowing that P somehow involves deserving epistemic credit for truly believing that P. The heart of Vaesen's paper is the development of a case meant to challenge certain versions of this claim: a case in which Sissi knows that P, but does not deserve credit for believing that P.

Suppose, now, that SYSTEM1 is an ordinary pre-9/11 baggage scanner, whereas SYSTEM2 is a post-9/11 upgrade that includes a "false signal" engine. Whenever SYSTEM2 projects a false image and the operator notices (she informs the system by, say, clicking on the image), the following message pops up: "False alarm: you were being tested!" If no message appears, the operator knows the threat is real. [So, SYSTEM2 makes Sissi more reliable than does SYSTEM1.] Consider, then, the following scenario:

SISSICASE: Sissi has been a baggage inspector all her life. She used to work with an old-fashioned SYSTEM1, but since 9/11, the airport she is working for introduced a SYSTEM2. Her supervisor Joseph, a cognitive engineer who was actually involved in the design of the device, has informed her how it works (how its operation is almost identical to the operation of the old system). Currently Sissi is inspecting a piece of luggage which contains a bomb. She notices and forms a true belief regarding the contents of the suitcase. As such, the bomb is intercepted and a catastrophe prevented from happening. (Vaesen, 2010)

Yet, here it seems to me that robust EC (of the sort to which A&A object) actually threatens to offer a rebuttal to Vaesen's case. An epistemologist of the robust EC stripe would probably want to explore the idea that Joseph is also part of Sissi's cognitive system; EC folks roll that way, sometimes. And if Joseph is part of Sissi's extended cognitive system, then she does get credit for her belief. So it looks as though Vaesen would be better served siding with A&A and rejecting robust EC.